
Bucket_Head

Member
  • Content count: 143

Everything posted by Bucket_Head

  1. jumping stright to 3d

    Heh. Well Scott, I won't tell him what to do -- though I will offer him some advice. Species, I think the question at the heart of all of this is really whether you want to focus on 3D graphics or on making a game. The reason why people so often strongly recommend completing a 2D game first before taking on 3D is not because 3D graphics are so difficult in and of themselves -- it's because making a game is so involved otherwise. There are many different complicated components that make up a game, and careful design and thorough methods are required to bring everything together effectively. If things are handled poorly, you end up with a mess that's very hard to work with -- and that will make you tired and irritable, and likely bored. If there is no other impetus to work on it besides your own desire to do so (which is waning at this point), then you will leave it behind and take up another, more interesting new idea. This can lead you down a path of many half-finished projects, which is quite common among hobbyist game developers. The knowledge required to actually finish games is born of discipline and forged with experience. It is a knowledge of rigorously enforced slow-and-steady methodology. Most anyone schooled in these truths can assure you that you want to minimize complexity when working with new, unexplored technology -- that is, if you're still figuring out what is involved in making a stable and complete game engine (and if you think graphics and input are it, you've a lot still to pick up, grasshopper), doing that at the same time as figuring out what is involved in 3D graphics is indeed taking quite a massive bite for yourself, which you'll find difficult to chew. It's encouraged to do 2D first because, with the graphics so simple, it's fine ground to let the logical organization of a simple game engine framework come to fruition before upping the ante to 3D. It's building a backbone before you walk, basically...and it's much easier to do this than it is to arrange an effective game engine design around some quickly hacked-up 3D code (which your first explorations into 3D graphics surely will be, as most everyone's are). If, however, you are less interested in making a game engine than in playing around in 3D graphics -- which I don't mean to insult; it can certainly be a lot of fun and a very worthwhile activity -- then by all means screw 2D and go to 3D. It's basically a question of what you want to explore and focus on first -- 3D graphics, or a proper game engine. You can best make that call because you know best about what you want to achieve.
  2. bullet time in a MMO

    Quote:Original post by Anonymous Poster A simple implementation for the propsal above would be to slow down the time for the player who starts its bullet time and let him act upon the saved state of the game. After some determined time the player's time has to be accelerate to catch up with current game state. This would allow the player to aim with more precision because other players are moving slower. The other players would see this as damage coming from the player even when they are already moved past his line of fire. (can also be seen during ping spikes, some older fps games can be tricked by forcing packet drops to make everyone stand still for a brief moment) The end result would be: -nice effects and easy targeting for the player involved -bad movement artifacts and lag like hits for others That still sounds pretty bad for the non-bullet time player; they see the bullet time player shooting in a different direction, yet they get hit? This would come off as a buggy lag-induced experience -- which is true enough, as we're inducing a lag in gameplay for them to produce this effect. However, we can still bend the bullet time player's aim so that they do in fact shoot and hit the non-bullet time player. We can pass this off as relativistic phenomena. If executed well enough, it would simply seem that the bullet time player aimed well and hit them. Quote:Original post by Shuger i found this by searching gamedev site: http://www.gamedev.net/community/forums/topic.asp?topic_id=333833&PageSize=25&WhichPage=1 if i remember right FEAR multiplayer uses bullet time effect, you can also check FEAR Combat which is the game multiplayer without single, i'm not sure but i heard it's free. This is an interesting paper so far (I'm about halfway through) and I recommend it for all interested in this thread. I like their idea of dealing straightforwardly in terms of space-time, and expressing game events in terms of communication between active (non-deterministic) entities through use of passive (deterministic) entities that travel over contours in space-time. These contours can be shaped as is most convenient to fit your lag and timewarping situation. Of course you very much need to distinguish between what actually happens (game state) and what you see (mere visualization). If we feel the need to visually indicate signs of discrepancy between the two representations, we can do so via relativistic effects, such as the bullet-time blur effect and possibly through use of brightness or color to give more information as to the nature of the discrepancy.
  3. bullet time in a MMO

    I like Anonymous/Haphazardlynamed's idea re: dead reckoning hacks. You could increase the speed at which you estimate they could have traveled, and render a blur effect as you move the character -- so that as you get corrections, you'll get very fast blurry motions (from the non-bullet time player's view) which are conducive to the effect seen in the Matrix movies. The "Time Grenade" idea also sounds like an innovative weapon idea that would be fun to play with. Of course (?) I don't see this idea working in an MMO so much as in a game with networks of a standard FPS scope (ie maybe up to 32 players in a single game). A thought that I had was allowing players to be at differing points in time. Basically, you'd all start off at a synced clock time, but due to various actions (including invoking bullet time) you could throw your times out of sync. Playing on oliii's idea of limited time travel, it would be incredible if the game was automatically logging players' motions with timestamps, so that if you happened upon a player who was a few seconds ahead of you, your copy of the game would be playing their actions back, so you'd be able to get the drop on them. The game would notice the discrepancy in the two players' current times, and might throw one player into bullet time to let the other one catch up. So what you'd get then is one player would have a moment of advantage on the other player (while the other's recorded motions are played), but then the advantage would be reversed as the formerly disadvantaged player now gets some bullet-time to catch up. This could add an element of strategy: do you want to keep your clock a bit ahead or a bit behind? Still, it might seem that the player who's ahead might have the disadvantage -- if the player with the earlier clock can do enough damage in the time they're allowed, then the player with the later clock doesn't even have a chance to fight back. To give them a chance, the later player might see a "time disjoint" version of the earlier player, manifesting as a bullet-time-effected player, which might give them a chance to realize that hey, this might be a time to go back in time a bit to even things out. If they do this soon enough, they can even up the advantage sooner, so that they'd be on an even par again -- or they could go a little bit further back, causing a little bit of a crazy fourth-dimensional cat-and-mouse game (while energy lasts). Hmm...What I'm seeing now is a game in which you can change your rate of time forward and backward, where deviations from standard time rate cost power, and achieving deviations that are further from standard time position cost exponentially more power, which will keep players fluctuating within a nice, controllable range. Aha, and perhaps when you move forward in time at an accelerated pace, you regenerate power a bit faster? This might give players an incentive to go a little bit ahead of time, so that they'll have more power, yet will be more susceptible to someone getting the drop on them. If they're really fast, they can use their power to reverse the drop -- they'll likely have more of it, so they could win a war of attrition...if they're skilled enough. Ha, I really like where this is going. In short though, when things get out of whack, you could basically fudge it a bit and let things get a bit crazy, and as long as it's awesome and there's some sort of internal consistency that players can become adept at exploiting, there will be devout fans who love it.
If someone were to run with the disjoint time idea, the bullet-time effect can be used as good cover for fudging corrections to discrepancies in general.
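To make that cost curve a bit more concrete, here is a minimal sketch of how the power drain might be computed -- every name and constant below is invented purely for illustration, not taken from any actual game:

#include <cmath>

// Hypothetical sketch: power drained per real second, given how far the
// player's personal clock has drifted from the shared "standard" clock
// (timeOffset, in seconds) and their current rate of time (timeRate, 1.0 = normal).
float powerDrainPerSecond(float timeOffset, float timeRate)
{
    const float base = 5.0f;     // tunable: cost of any deviation at all
    const float falloff = 2.0f;  // tunable: how quickly the offset cost explodes

    float rateCost   = std::fabs(timeRate - 1.0f) * base;                          // linear in rate deviation
    float offsetCost = base * (std::exp(std::fabs(timeOffset) / falloff) - 1.0f);  // exponential in clock offset
    return rateCost + offsetCost;
}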
  4. pac man....

    Quote:Original post by dwarfsoft Quote:Original post by da madface and, if i want do something illegal? If you talk about it here then you get banned. We don't like thieves who rip off other peoples works. It sounds to me like this is a problem only if the software/data is not open source. Madface, are you familiar with open source software? The basic idea (as I understand it) is, you're free to use and modify open source software, as long as you present your modified software likewise as open source -- so you can't modify it and refuse to give it away, essentially. If you're cool with that, as it sounds like you would be, then it's just a matter of finding a Pac-Man clone that you like that is open source. Then you can modify its data as you like, and present your own modified version as open source itself. Open source software tends to be licensed in some way -- the details of what you're getting yourself into (nothing bad, but you just want to be aware of what your obligations may be and what you're not allowed to do) will be in the license the software is released under, which (hopefully) will be provided along with the software when you download it. Good luck!
  5. pac man....

    It seems most people here think madface wants to make his own Pac-Man game with his own sprites, but that's not what I see here. It sounds to me like he just wants to end up with a Pac-Man game whose sprites he can control. Madface, why not just google to find and download some homemade Pac-Man clones (there must be dozens out there at least), find one that loads in bitmap files for the sprites, and then go edit those yourself? You might even be able to find one that reads in levels from text files, so you could edit those too. I think sometimes we get a little overzealous in pushing people down the game programming path, when it's not necessarily what people are asking for... Sneftel, you had said he can't edit Pac-Man data -- I'm wondering if you're referring to copyrights? If he's simply making modifications for personal use, does that violate copyrights any more than making a clone of the game would?
  6. Help!

    Scott, first I'll take a moment to stress something that you've probably already heard before, but that I feel obliged to say anyway. Yes, you're right that making an MMOG is not the greatest of ideas -- and I'll tell you why. You're young, you're learning, and you're doing this as a hobby in your spare time. Maintaining an MMOG though, if it's at all successful, is a constant job. It's not something you can do just as a hobby in your spare time if it's going to stay up -- it's going to take money, in the end (your money, since it's your project) to pay people to keep it up, or even just to feed yourself while you keep the game up. It's really just not a good investment of your time; you'll sap all your time and energy on one project that you will get tired of, when you would have learned and grown so much more and had much more fun doing several smaller games instead. If you scale the scope of your projects small enough that you can finish them by the time you get tired of working on them (and you will), then what you've done is accomplished something to its full extent -- all the while learning and staying enthused. Switching between multiple, varied projects will also stretch your skills in myriad directions, so you'll become a more experienced, well-rounded hobbyist game developer. It's just a better thing to do. To that end, and I urge this, do small games. Do tiny games. Make Tetris. Make arcade games like Pac Man or racing games or fighting games. Hell, make freaking Mega Man or Mario spin-offs. Have you seen Gish? That game is incredibly fun -- and it's a simple platformer (albeit with great physics). You can even do small-scale multiplayer games on a Diablo scale, where anyone can set up their own small server that's not assumed to be dedicated and people can have their own small, fun sessions -- that'll be more fun to work on for a hobbyist like yourself than a full-blown MMO. If you want ideas, take a critical, analytical look at anything you do in your life that would make a fun game. See an interesting science fiction movie with a nice premise? "Hey, I wonder how that would work as a game...just a simple arcade game where you need to escape-the-whatever would be great!" etc etc. You can also take a look around at other indie games people have made, to see what sort of scope other people are setting for themselves. Zombie City Tactics is another neat indie game that graphically is quite simple, yet has nice tactical game play that can be challenging and entertaining. So, in short, consider taking on the challenge of finding a core element of fun, and formulate a tiny kernel of a game idea just around that -- and don't expand too far past that for the final game design. In my opinion, that's what makes for the best games anyway: simple, fun, great. Make those, Scott, and I'll play 'em. I don't really care for MMOs anyway.
  7. Artists, programmers, and free projects

    I can attest that this difference in attitudes towards work between artist and programmer extends beyond the hobbyist sphere and into the professional. In my time having worked professionally as a game programmer (a little over a year now), time and time again it is the programmers that stay late to work further while the artists go home at a time (say 6 PM) that would be considered standard outside of the game development industry. While sure, it could be argued that the nature of the programmer's work tends to simply demand more time (programs will have bugs that require fixing...but if the art looks good, it's done), and that that is the main reason why programmers stay late, I believe there's a more fundamental issue at play here. In short, in my opinion, the programmer is more passionate (read: more obsessive) about the work. This is not simply a job for them, this is their life -- they live and breathe the code. For the artist, however, it is more simply a job, and most definitely not their lives; artists do tend to be more "normal" and less "obsessive" about their work than programmers in my experience, and have social lives that tend to be more socially acceptable outside of the game industry. Considering this theory, it seems to make sense that the programmer would want to stay late, to continue doing the work they love, while the artist would be more likely to scoff at the idea of working "unreasonable" hours for (what is likely) meager pay. The artist would be more apt to quit under bad work conditions, whereas the excited, driven, and naive programmer is simply happy to be there and is likely to put up with whatever he has to so that he can be allowed to program games professionally. I believe that, for better or worse, these are elements of the commonly accepted culture that we live in, and that we all (to at least some degree) subconsciously buy into and so perceive as normal and expect, and that we ourselves perpetuate (knowingly or not). Consequently, this will naturally carry over into the hobbyist sphere as well, where the artist doesn't want to waste his time on work he's not getting paid for, though the programmer sees the act of coding as an end in itself. I realize that my analysis has painted the programmers as the real liberal artists and the artists as the more vulgar, illiberal wage-oriented folks. I do want to say that I don't mean offense by it, as artists can be very idealistic and driven people as well, and that there are always plenty of cases that break the mold. Still, I feel pretty confident about this, and I'm curious about what an artist's response would be.
  8. OpenGL Viewing Camera Problem

    No, don't do what Anonymous says, heh. Here's the deal. You start with x increasing to the right, y increasing upwards, and z increasing out of the screen, towards you -- right? Normally, you might come to interpret this as saying that x is right, y is up, and z is forward/backward -- and increasing backwards, which seems strange. This is not the only way to interpret x, y, and z. Take a step back and remember -- they're just axes. The directions they refer to can be whatever you want them to be. All you have to do is just say that x increases eastward, y increases northward, and z increases upward (which I think is what you want) and bam! You start by looking down on the scene from above, instead of starting by looking forward. In this way you achieve your goals, just by redefining what x, y, and z mean for you. From there, all you need to do is render all of your geometry with this reference frame in mind. You'll probably want to lift your camera upwards (in the +z direction) to get a better view of your scene, and you may decide to rotate (pitch backwards by rotating about x) to look forward instead of downward, but that's your business. I hope this helps. Good luck!
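For what it's worth, here is a minimal fixed-function sketch of that "z is up" setup -- the function and parameter names are placeholders, and drawScene() stands in for whatever rendering you already have:

#include <GL/gl.h>

// pitchDegrees = 0 looks straight down on the scene from above (the default
// once z is treated as up); pitchDegrees = 90 looks out toward the horizon.
void setupCameraZUp(float cameraHeight, float pitchDegrees)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Pitch backwards by rotating about x...
    glRotatef(-pitchDegrees, 1.0f, 0.0f, 0.0f);
    // ...and lift the camera up the +z axis (i.e. push the world down along z).
    glTranslatef(0.0f, 0.0f, -cameraHeight);

    // ...then render geometry with x = east, y = north, z = up in mind.
    // drawScene();
}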
  9. Getting into the gaming industry.

    Hey there, I've been programming games professionally for about a year now, and it's definitely fun, rewarding work. I'll be happy to draw on my experience so far to respond to several of your questions. Quote:Original post by solinent In a typical team of game developers do some developers code and others model, or do they do both. Obviously I should learn both, but I was wondering how a typical game environment works. Typically there are a few distinct roles in game development, and a given developer will adhere strictly to one of those. You'll have programmers that do not do art, and artists that do not do code. It's also common to have sound and/or music people that focus solely on creating their content files, and sometimes a group of artists will be subdivided into 2D pixel artists, 3D modelers, and animators. You may also have people devoted to creating levels, or to writing scripts used by compiled code to change data in-game (such a position is sometimes given to programming interns). This all of course varies dramatically with the size of your team -- indie projects may involve a small handful of people that each wear many hats, whereas professional development teams of twenty or more people can each afford greater specialization. With more distinct roles, it's also helpful to have people that can speak enough of everybody's language to help coordinate people -- which is where roles like producers, who (though they may not be considered developers in the strict sense) are indeed absolutely essential, come in to help make sure that nobody's waiting indefinitely on somebody else and that the project moves forward smoothly at a steady pace. So in answer to your question, actually, it's not at all required that you learn both. Becoming familiar with a variety of disciplines can always come in handy because it can help you be more effective at communicating with co-workers in other disciplines, and so you can better understand how to work with them towards solutions, but it's not required that you master every single trade. You're far more useful in this industry if you do have a specialty -- so keep that in mind. Quote:Original post by solinent I am 14, and I know web development and C++ programming and visual basic programming and windows programming, but I am just getting started in DirectX or OpenGL. I found a bunch of tutorials, and I was wondering how things are controlled. Is it all effects and environments are made in the model program, and they are loaded and made interactive with code? And only when certain objects need to be changed the programmer has to alter them? Because I'm using Blender for modelling and it has many features to make models and textures and such. But DirectX has similar features (Like adding transparency to a cube in DirectX, or adding it at model time and just importing it?) As you say, typically you have various data files that are read in by the program at runtime and displayed and interacted with according to the code. Nowadays many developers try to do their best to make their projects "data-driven," meaning that more and more of how the game works is described in data files rather than explicitly in the program code itself. The program code is arranged mainly to know how to interpret the data files, and how to do what they describe.
This makes the game very flexible and easy to alter, and less error prone; in general, the simpler and more generalized you can make the program code, the less likely you will have convoluted errors in very specific, hard-to-track-down cases (which tend to require obscene hacks to fix when you're down-to-the-wire and need to ship the product soon). The more abstract you make things, the easier they will be to work with -- and offloading work from programmers to level designers is a pretty good win also. Quote:Original post by solinent I'm only in grade 10, and I am just getting started and I would like to know since the industry is progressing so fast, is there even any point in learning online (I am constantly finding old documents and such, right now I'm using tutorials from directtutorial.com which is making all the tutorials still. (so they are new). It's always good to keep learning -- just take it all in, everywhere you look. You're right that the industry is progressing fast, and it's very true that many things will be radically different by the time you are ready to join the industry. However, there's that old saying that "the more things change, the more they stay the same" which definitely applies here. There is a good lot of fundamental theoretical knowledge that all of our technology is based on, and this is stuff that's not going to go away -- new technologies are just going to be more interesting and refined developments of the same fundamental truths. School (universities etc) is a great place to get firmly grounded in this theory, and to develop the abstract reasoning skills that will help you maintain the adaptability to pick up any sort of new technology as it comes and run with it -- and even to combine differing ideas with some of your own and come up with new technology yourself. It's been said (and I agree) that the university is a place to "learn how to learn" in a way that you hadn't before -- so it's definitely worthwhile (as long as you put your all into it) to just keep learning, everywhere you look. Quote:Original post by solinent Anything I should know to get me into the gaming industry? Also, what do I need to do in university to get into the gaming industry? I'm more interested in what kinds of universitys offer gaming programs, or whether I would just have to learn the seperate parts and combine them myself. I am very critical of these gaming programs being offered at universities nowadays. Sure, they may provide the job training that works as a jump-start to attaining skills used in the industry right now, but my concern is that they may skimp on the depth needed to be able to think all the more outside-the-box and roll with the punches as things continue to progress and change in the industry. I encourage you to go for a more traditional computer science degree (if you decide to go the programmer route) which, though it may not be as specific as game programming, will grant you a profound knowledge that is quite applicable to game programming. That being said, though, one of the most important classes I took when I was at a uni was a class in software methodology -- a less theoretical and more practical course. It dealt with working in teams (typically programming assignments in class are totally done individually) and coordinating goals together, gauging progress, and formally and explicitly declaring ahead of time what exactly it is that's to be made (extremely important to do for any nontrivial project).
Do be sure that you look into this area (how to go about working together to effectively get the work done) in addition to exploring the details of the work itself. As for actually getting in, absolutely the most effective way is to know somebody on the inside who can recommend you (or directly hire you if they're in a position to do so). For my first shot at professional game development, I was hired by an old friend to work on a Nintendo DS game -- and after getting experience on that, I was able to land my current job (and I've helped get two of my friends hired here so far). Short of that, you're going to have to develop some decently impressive work to show off, and present that in some sort of portfolio along with a resume, and hope that that gets you an interview -- at which point, if you're pleasant and seem to know your stuff, and aren't up against others that seem much more impressive, you'll probably have a very good shot at getting hired. Good luck on working your way in -- it'll take years of involved study, but if you keep at it, you'll get where you want to be.
  10. How do links work?

    Quote:Original post by lordcorm What i meen is this: If i created to differnt files (mainframe and monster) how whould i make it so that i can link the monster file to the mainframe and get the monster function and use it in the mainframe Ah okay. Well, for this, we need to remember a few things: mainly, we need to be able to differentiate between definitions and declarations, and we need to remember that each source file is compiled from scratch separately. For the first part, a definition is where you actually specify that you want to reserve a spot for something to be created, whereas a declaration merely indicates that the thing is defined somewhere, without actually defining the thing. (You don't necessarily need both; a definition by itself works fine as a declaration, too). You may wonder why you need this distinction, but it does come in handy in many cases. If you're defining two things (say, two functions) that each need to call the other, you may be in trouble, since the definition of one necessarily has to go before the other. Before the compiler has read through the definition of the second, it won't know what you're talking about when you try to call it from the first, and so will generate a compiler error -- an "undeclared identifier." However, if you simply add a declaration of the second function up above the definition of the first function, then the compiler will have an idea of what you're talking about when you try calling the as-yet-to-be-defined function, and things will go smoothly. What I mean by the second part is, when the compiler compiles each of your .cpp files, it does so without being aware in the slightest that any of the others exist. This is why you need to have the same set of #includes or whatever at the top of each one -- the slate is wiped clean with each new file, so every new thing you want to work with needs to be redeclared. Now we can start to see how declarations allow you to lay things out in different files, but so that they can still reference each other. Just make sure that when the compiler compiles a file, everything that's referenced has either been defined or declared within "sight" of the file (either in the file itself or via #includes) so that all identifiers will be understood. The other thing that's required is that whatever's being referenced (be it a function, global variable instance, or whatever) can be resolved in the linking stage -- if you declare something that isn't actually defined anywhere, you'll compile just fine, but linking will fail. Likewise, if you define it in more than one object, when you try to link, you'll get errors due to redundant definitions. The last thing I'll comment on is a matter of style and code hygiene. Suppose you have a function defined in one source file that you reference in a dozen other source files. You could copy and paste the function prototype (which serves to declare the function's existence) into each of these source files, and that would work fine. However, if you were to then, say, add another argument to the function's argument list, and forget to change all of the prototypes, you'd get linking errors due to things not matching up -- and you'd manually have to look through and find the problem and fix it. That can be a hassle! A better solution is to put the declaration in a single header file, and have all of the source files that want to reference it #include that.
This way you only need to make the change in one place, and then the compiler will be able to point out all the erroneous spots where you forgot to add the extra argument where you made the call, complete with line numbers -- which is much better than the linker will do, which is basically just tell you "oh, linking error...this symbol is undefined in this object. Don't look at me; you wrote it." Quote:Original post by Oggan just want to let u know that wasnt a waste of time writing all that for him, cuz i read it and learned from it =) Glad you found it worthwhile.
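To make that layout concrete, here's a minimal sketch using the file names from the original question (mainframe and monster) -- the function itself is just an invented stand-in:

// monster.h
#ifndef MONSTER_H
#define MONSTER_H

// Declaration only: every file that #includes this header learns that the
// function exists somewhere, without defining it here.
void attackPlayer(int damage);

#endif

// monster.cpp
#include "monster.h"
#include <iostream>

// The one and only definition. Defining it again in another .cpp file
// would give a "multiple definition" error at link time.
void attackPlayer(int damage)
{
    std::cout << "The monster hits you for " << damage << " damage!\n";
}

// mainframe.cpp
#include "monster.h"  // pulls in the declaration, so the call below compiles

int main()
{
    attackPlayer(7);  // resolved to the definition in monster.cpp by the linker
    return 0;
}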
  11. How do links work?

    I'm not quite sure what you're asking -- are you asking about the linker? As you may have noticed when compiling programs, your compiler generates some intermediary files called object files that probably have file extensions like .o or .obj. These are binary files that are much more machine-readable than the C++ source code files you write, but they're still not the finished product. Generally speaking, a compiler will work on each .cpp source file independently and (provided there were no errors in doing so) it will generate from the source file an output object file. At that stage the compiler is done, and the linker will take over. The linker takes these object files and links them together into a final output executable file. In this manner, you can work on a single project that is composed of many source files, and still manage to combine them together into a single program. There are several advantages to this strategy of organization. If you're using an intelligent make system, then modifying a single source file will only require that you recompile that single file (instead of all of the source files in the project, since files that haven't changed would just generate the same object files that are already there) and relink the object files in order to build your program. Also, you can decide to build a library instead of an executable as your output, and libraries can in turn be linked directly in with the object files of other programs. This is how you are hooking in standard library calls like for printing and working with files -- these calls aren't syntactically speaking a part of the language itself; rather, they're amongst a set of standard libraries that are included in standard distributions of C++ compilers. Your intuition about adding complicated systems (like a health or medicine system, sure) to a program via the linker is correct -- you would want to write code in separate source files, keeping things organized, and compile separate object files, and link things together to complete the build process. Here is a wikipedia article for more information: http://en.wikipedia.org/wiki/Linker As for how to actually get things linking at the command line, it varies by compiler. I recommend that you read through your compiler documentation or ask around (asking about a specific C++ compiler) to find your answer.
  12. Spaceship Control Problems

    Hey, if you feel that something is beyond your ability, that's a good time to start extending your ability. With this sort of problem, I find it useful to try looking from different angles (so to speak), including looking at simpler versions of the problem. Consider the 2D case -- much simpler, I know, but as we'll see, we can draw some very useful wisdom from it. As I'm sure you know, you can represent a two-dimensional rotation as a simple number, an angle, typically in radians or degrees, with 0 pointing right and increasing counter-clockwise. You can just as well represent a two-dimensional rotation as a rotated reference frame -- a pair of orthogonal (perpendicular) unit vectors that form a basis. Let's picture these vectors in our minds as arrows pointing out from the origin. You start with a vector pointing along the X axis, with coordinates (1, 0), and another vector pointing along the Y axis, with coordinates (0, 1). If we apply a rotation of 90 degrees, we get a new reference frame, with the first vector rotated to become (0, 1) and the second vector rotated to become (-1, 0). A handy way to visualize this is by taking the thumb and forefinger on your left hand, forming an L, and tipping it counter-clockwise 90 degrees. Now, as you may know (and if you don't, you can see that it is true with trial and error) the coordinates of the first "x" vector, as it is rotated into a new reference frame by some angle theta, follow a pattern -- the coordinates will be at (cos theta, sin theta). The coordinates of the second "y" vector are like those of the first, but 90 degrees out of phase ahead of the "x" vector -- with coordinates at (-sin theta, cos theta). More information about this phenomenon can be found at http://en.wikipedia.org/wiki/Unit_circle. Next, take a look at a 2-dimensional rotation matrix:
[cos(theta)  -sin(theta)]
[sin(theta)   cos(theta)]
Hmmmm...say, aren't those the coordinates of the "x" vector down the first column, and the coordinates of the "y" vector down the second column? By Jove, they are! I'm not going to delve into a full explanation of just how this works, though if you spend about ten minutes thinking about it and tooling around with a pencil and paper (and considering the definitions of sine and cosine, and the shape of their wave patterns and such) you'll hopefully arrive at a fairly intuitive understanding that you can work with. But in any case, you can do the same thing with vectors in 3-dimensional space. Any arbitrary rotation in 3-D space can be represented as a 3x3 matrix with three mutually orthogonal unit-length vector columns -- and these would be the "x," "y," and "z" vectors, exactly as they are when rotated into place. Very straightforward, isn't it? You could simply maintain one of these matrices to represent ship orientation, and whenever you want to change the ship's orientation, you can simply multiply this matrix by a matrix representing the small rotation change, and the result is the rotated matrix. A few notes though -- first and foremost, beware of precision errors accruing! Over time, your vectors might wear down and change to non-unit length, or stop being so orthogonal. There are steps you can take to fight against this though -- think about it.
An alternative is to explore the crazy world of quaternions, which are much smaller (a mere 4 floats vs the 9 you'd need for a 3-D rotation matrix), which are less prone to precision error wear and tear, and which multiply together much faster than matrices do -- however, they are more difficult to comprehend. Still, if you're interested, you can start by following a similar path to what's outlined above, and begin by exploring the 2-dimensional version of quaternions, namely complex numbers. Even besides all that, many people have utilized quaternions without understanding them at all, simply working with them as a black box, and using them as a drop-in replacement for rotation matrices. Anyways, more food for thought.
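As a rough sketch of the matrix-maintenance route (the Vec3 type and its helpers below are invented stand-ins for whatever math types you end up using), the orientation storage and the re-orthonormalization step might look something like this:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalized(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct ShipOrientation
{
    // The columns of the 3x3 rotation matrix: the ship's local axes in world space.
    Vec3 right   {1, 0, 0};
    Vec3 up      {0, 1, 0};
    Vec3 forward {0, 0, 1};

    // Fight the precision wear-and-tear mentioned above: after many incremental
    // rotations, force the axes back to unit length and mutual orthogonality.
    void reorthonormalize()
    {
        forward = normalized(forward);
        right   = normalized(cross(up, forward));
        up      = cross(forward, right);
    }
};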
  13. 2D Surface Rotation Question

    Ol' Hap (aka Anonymous) speaks truth. I'll explain it using an analogy. Basically, you're performing a coordinate transformation here. You've got the coordinate system of your source image, and the rotated coordinate system of your destination image. Your algorithm, as implemented, is:
for each pixel in the source image,
...transform its x,y coordinate to a rotated x',y' coordinate
...copy the color from source@(x,y) to destination@(x',y')
But consider that you're working with floating point values when you do your transformations, but casting to int when you do your final plotting. When you cast to int, of course you lose precision -- and you're not even rounding here; you're slamming the float value you had to the next int closest to zero (if the value is > 0, you're taking the floor; if it's < 0, you're taking the ceiling). But never mind that; ask yourself first why you are so sure that your algorithm will not "miss" any x',y' pixels in the destination image (as it most clearly is doing). Certainly, not all transformations, when implemented in this for-each-source-pixel manner, will catch every pixel in the destination image. Let's say, hypothetically, that instead of performing a rotation you were performing a scale. Now, if you're scaling to a smaller size, then as you iterate over all pixels in the source image, inevitably multiple pairs of x,y coordinates in the source will map to the same x',y' pair in the destination -- and you will redundantly plot the same destination pixel a few times, depending on how extreme the scale is. Conversely, if you're scaling to a larger image size, there will be some x',y' coordinate pairs that no x,y pair maps to. This is clear from the fact that there are simply more pixels in the destination image than there are in the source image -- and so, since we're only plotting as many pixels as there are in the source image, there must be several that we're missing (forming stretches of "missed" pixels between any "hit" pixels, again depending on the extremity of the scale). With a bit more thought of how to circumvent this problem, we can see that if we instead choose to iterate through each of the pixels in the destination image, and for each x',y', do an inverse mapping to get at the corresponding x,y in the source image, and then copy the color value over, that we can rest assured that every pixel coordinate in the destination image is getting set. This will then work regardless of how big we might want to scale our destination image relative to our source image -- we won't have any of those bothersome gaps between pixels. This also has the added bonus that if we try to scale to a smaller image, we don't end up hitting destination pixels redundantly -- each destination pixel is hit no more and no less than once, which is exactly what we need. No holes and no extra processing -- perfect, ne? A similar problem with the missing pixels is occurring here, due to rounding errors (as mentioned above). The simple solution is to reverse your algorithm, and have it iterate over the dimensions of the destination image (rather than the source), and to start with those coordinates and perform the transformation backwards to get at the source image, and then copy that over -- pixel by pixel. That way you can be sure that each pixel in the destination image is getting set, since you've processed each and every pixel. I hope this helps, and works for you. Good luck.
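Here's a minimal sketch of that reversed loop (names invented; it assumes plain 32-bit pixel buffers and rotates about the image centers):

#include <cmath>
#include <cstdint>

void rotateImage(const uint32_t* src, int srcW, int srcH,
                 uint32_t* dst, int dstW, int dstH, float angleRadians)
{
    const float cx = srcW * 0.5f, cy = srcH * 0.5f;   // source center
    const float dx = dstW * 0.5f, dy = dstH * 0.5f;   // destination center
    const float c = std::cos(-angleRadians);          // inverse rotation: destination -> source
    const float s = std::sin(-angleRadians);

    for (int y = 0; y < dstH; ++y)
    {
        for (int x = 0; x < dstW; ++x)
        {
            // Map this destination pixel back into source coordinates.
            float rx = (x - dx) * c - (y - dy) * s + cx;
            float ry = (x - dx) * s + (y - dy) * c + cy;

            int sx = static_cast<int>(std::floor(rx + 0.5f));   // round rather than truncate
            int sy = static_cast<int>(std::floor(ry + 0.5f));

            if (sx >= 0 && sx < srcW && sy >= 0 && sy < srcH)
                dst[y * dstW + x] = src[sy * srcW + sx];
            // else: the source doesn't cover this destination pixel; leave it or clear it
        }
    }
}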
  14. Actually, I just realized that it's entirely possible to partition but not have either of the new subgraphs be closed (if the enemy has touched the insides of both sides, foiling our nefarious scheme) or to even partition into three pieces instead of two, and to have (I think as many as) two of the new subgraphs be closed...so keep that in mind.
  15. You are correct in that identifying a closed region of neutral hexes bounded by a contiguous wall of hexes of your color and the wall at the edge of the board requires some sort of breadth-first search. However, in thinking about various situations the game board may end up in, I realize that there is some ambiguity in what cases you would consider "closed" (and thus resulting in the simultaneous earning of several hexes). Now, let's consider the hex board to be a graph, with each hex as a vertex with an edge to adjacent hexes. It may prove convenient to allow an outer edge of "wall" hexes forming a border around the board, or it may prove more convenient to simply say that the graph ends with the neutral hexes at the edge of the board. My first thought was that any connected region of neutral hexes with bounding edges to hexes that are either the given player's color or walls would be considered closed and thus takeable -- however, if this were so, our very first move would claim the whole board as our own. You could add a stipulation that you must have made at least two moves, which would have allowed the other player a chance to put one down and thus give you something to work against, but I still don't feel that this is getting at the heart of the problem. Also, suppose we had a situation wherein a player plotted a closed circle (with a hole in the middle) somewhere in the middle of the board. Would this area in the middle of the circle be considered closed in our sense, meaning that the neutral hexes in the middle are considered takeable? Or must the connected region be bounded by both the player's color AND wall? While I could be wrong, in reflecting on what I imagine to be the sort of gameplay that you're going for, the answer would be yes -- that the region in the circle would be considered closed and takeable. If that's so, I think what's really at the heart of the situation here is ultimately a matter of partitioning the game board. If we take the set of neutral hexes as a pool from which to draw vertices, we can determine from these the connected components of the board (being the board's "maximal connected subgraphs" (wikipedia)). Really, the moment at which you will claim territory (en masse) is the moment when you claim a hex on the board that ends up partitioning one of these connected components into two pieces -- and then at that time, the subgraph piece that you take is the one that satisfies our earlier condition of (put most simply) not neighboring any of the opponent's hexes. What we need next, then, is to be able to determine when placing a hex partitions a connected component, and then which components are enclosed -- and then ideally to be able to do so relatively efficiently. I'd like to recommend a technique that I happened upon in my A* implementation. As I expect is usual, I had a few fields per vertex in the world for use in the A* algorithm, for marking what I'd visited and so on. Rather than clearing the board after I'd completed a search (so it'd be ready for the next one), I simply numbered the searches, so that as I visited a particular vertex upon a particular search I would leave the number of that search on that vertex. Then the next search would have a new number (I'd just increment), and since none of the vertices would be labeled with the number of the current search, they were thus cleared for all intents and purposes.
If you use such a technique, you can also retain information about which (breadth-first) search spanned a particular vertex (you'd want to not stop abruptly when you detect an opponent's hex adjacent to a visited vertex, but keep going -- just don't visit any non-neutral hexes when searching) and so use this to know which set of vertices are in the same connected component (they'd all have the same search ID). The algorithm would go something like this (with every move taken):
- remember which search ID we're at at the beginning of this move
- make the move
- while there exists a neutral hex adjacent to the spot just moved to whose search ID is less than the current search ID...
  ...increment the current search ID,
  ...perform a breadth-first search (w/ current search ID) on said hex
When we're done, we can examine the difference between the search ID we'd then be at (after running the above algorithm) and the search ID we were at beforehand, and we'd know how many connected components there were. We could compare this to the number of connected components there were before we made the move, and if this number is greater, then we must have partitioned one of the components, and so it'd then be time to claim some bonus territory. If, at the time we perform a search, we track whether we ever come up against a neighboring hex of the opponent's color, and associate that information with the search ID, then when we realize we'd partitioned a component we'd be able to quickly look that information up (we'd only need to remember for searches for the current move, which really shouldn't have to be more than...three, I think). Then it'd just be a matter of iterating over the hexes that match that search ID and coloring them in, woohoo! That should be fast and hopefully seems straightforward. EDIT: Anonymous never forgives. Changed "strongly connected" to "connected" since (as Anonymous points out) "strongly connected" only applies to directed graphs. [Edited by - Bucket_Head on July 14, 2006 4:36:45 PM]
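A loose sketch of the search-ID trick, with an invented Hex layout (the real game's data structures will of course differ) -- the point is just that "clearing" the visited flags is replaced by bumping a counter:

#include <queue>
#include <vector>

enum class Owner { Neutral, Me, Opponent, Wall };

struct Hex
{
    Owner owner = Owner::Neutral;
    int   searchId = 0;              // which search last visited this hex
    std::vector<Hex*> neighbors;     // adjacency, filled in at board setup
};

struct SpanResult
{
    std::vector<Hex*> component;     // the connected neutral region we spanned
    bool touchesOpponent = false;    // did we ever border an opponent hex?
};

// Breadth-first search over neutral hexes, stamping each with searchId.
SpanResult spanComponent(Hex* start, int searchId)
{
    SpanResult result;
    std::queue<Hex*> open;
    start->searchId = searchId;
    open.push(start);

    while (!open.empty())
    {
        Hex* hex = open.front();
        open.pop();
        result.component.push_back(hex);

        for (Hex* n : hex->neighbors)
        {
            if (n->owner == Owner::Opponent)
                result.touchesOpponent = true;   // note it, but keep searching
            else if (n->owner == Owner::Neutral && n->searchId < searchId)
            {
                n->searchId = searchId;          // counts as "visited" for this search
                open.push(n);
            }
        }
    }
    return result;
}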
  16. Trouble Orienting an airplane in a flight sim

    You may not be aware, but OpenGL internally represents all transformations (such as translations, rotations, and scales) as 4x4 matrices. There is a very straightforward procedure for multiplying a matrix by a vector (automatically performed each time you specify a vector in OpenGL), which will yield a transformed result vector. In our OpenGL program, we begin with the identity (do-nothing) matrix (when we glLoadIdentity()), and from there, each subsequent call to glTranslatef or glRotatef or what-have-you will automatically generate a new matrix to accomplish the specified transformation, and multiply it by the currently set matrix, to replace it. The result is the concatenation of the transformations, meaning that the multiplied transformations will be performed in order -- but rather than have to multiply each vertex by a series of matrices, we can just multiply it by a single one, which will perform all the transformations specified so far. So you see it's both flexible and efficient. What your original code had been doing was performing two sequential rotations, which would never be able to achieve your goal. The simple fact of the matter is, one rotation and then another just isn't an adequate model of reality -- that's not what's going on. You just have one orientation, and that's it. In order to meet your goal, you'd need to do it in a single rotation, which haphazardlynamed's technique achieves by manually specifying the rows and columns of the transformation matrix. I won't at this time go into full explanatory details (it's late), but I will say that you'll want to study linear algebra in depth; it's essentially the mathematics of matrices and vectors. (BTW I can see that you already know something about linear algebra due to your awareness of vectors and cross products, so by no means do I mean to belittle your awareness, however I know all too well how much of this information can be picked up on-the-fly rather than studied in depth. To study in depth, it's linear algebra all the way.) Study how a transformation matrix can contain basis vectors for a reference frame (noting how the identity matrix contains the three cardinal vectors of a default reference frame), and from there you may begin to see how specifying the vectors of a rotated reference frame (your relative forward, up, and side vectors) can produce a matrix that transforms vectors into that rotated reference frame. Now that you know some terminology, wikipedia and google are your friends. Good luck in your journey. PS -- you can also accomplish this using gl's gluLookAt (http://www.hmug.org/man/3/gluLookAt.php), in which you can essentially supply a location for the camera, a point to look towards (essentially the same info as a "forward" vector), and an up direction (GL will figure the right direction from the cross product), and will generate this same matrix for you. If you toy with that you should be able to get the same result.
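As a tiny sketch of that gluLookAt route (the position/forward/up parameters are placeholders for whatever your sim already tracks):

#include <GL/gl.h>
#include <GL/glu.h>

void applyPlaneView(float posX, float posY, float posZ,
                    float fwdX, float fwdY, float fwdZ,
                    float upX,  float upY,  float upZ)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Look from the plane's position toward a point one unit ahead of it,
    // using the plane's own up vector; gluLookAt builds the forward/side/up
    // basis (and thus the rotation matrix) for you.
    gluLookAt(posX, posY, posZ,
              posX + fwdX, posY + fwdY, posZ + fwdZ,
              upX, upY, upZ);
}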
  17. Normal of a non-convex poly

    For the purposes of finding the normal, you can find a convex bounding polygon and take the normal of that. Alternatively, there may be some way to use only convex polygons that you should look into. I think your first question to answer is either "how do you know if your polygon is concave?" or "how do you wind a series of vertices into a polygon in a way that drops inner vertices?"...I don't have a ready answer for you in this regard, but I think that's a good direction to look down.
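For what it's worth, one standard alternative the post doesn't name is Newell's method, which sums cross-product terms over every edge and gives a reasonable normal for planar polygons whether they're convex or not -- a quick sketch with an invented Vec3 type:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Newell's method: tolerant of concave and nearly collinear vertices.
Vec3 polygonNormal(const std::vector<Vec3>& verts)
{
    Vec3 n{0.0f, 0.0f, 0.0f};
    for (std::size_t i = 0; i < verts.size(); ++i)
    {
        const Vec3& a = verts[i];
        const Vec3& b = verts[(i + 1) % verts.size()];
        n.x += (a.y - b.y) * (a.z + b.z);
        n.y += (a.z - b.z) * (a.x + b.x);
        n.z += (a.x - b.x) * (a.y + b.y);
    }
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };  // assumes a non-degenerate polygon
}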
  18. Text Adventure help

    dvang, you don't necessarily need very much. For each object you want to have in the game, start by typing one of these for each one: class NameOfObject { }; Write these with the most basic items at the top, and the most complicated/involved items (those most likely to build on the more basic items) towards the bottom. In general, it's a safe practice to create an object for each type of "thing" you may need in the game world; in your case, one for a Room, one for an Object, likely one for the Player or a given Character, and possibly also one for a Connection (or whatever you want to call it -- maybe Door? I like Door) between Rooms. Then, start adding to each class the elements it may need. Most things will probably want some kind of text description (so that when you look at them, you'll have something to display) -- many here would I'm sure recommend a std::string for storing this. Also, any given Room will need to have some lists of things -- probably a list of Objects and a list of Doors, to start. A good class to use for listing things is a std::vector<YourTypeHere>. You'll want to start adding methods to things too, so you can interact with them. Running with the Door metaphor, you might give it an open() function, which would transport you to whatever's on the other end of a door...which you might indicate with some sort of reference to another Room -- perhaps a pointer, or an index into a big array of Rooms. However you want to do it is good. As for building all this, you'll probably want to provide some calls for adding to Rooms and establishing Doors to other Rooms and so on. The coolest thing would be if you could build all this from a file or set of files, but for that you'd need a parser, which is some work, but which will be well-worth it as far as ease of content generation down the road goes. I know this isn't a cookie-cutter solution, but I hope it gets you thinking. Let us know if you want more help once you've got more details ironed out.
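A bare-bones sketch of the kind of class layout described above (all of the names here are just suggestions, not requirements):

#include <string>
#include <vector>

class Item
{
public:
    std::string name;
    std::string description;
};

class Room;  // forward declaration so Door can refer to Rooms

class Door
{
public:
    std::string description;
    Room* destination = nullptr;          // where this door leads

    Room* open() { return destination; }  // "walking through" just hands back the next Room
};

class Room
{
public:
    std::string description;
    std::vector<Item> items;
    std::vector<Door> doors;
};

class Player
{
public:
    Room* currentRoom = nullptr;
    std::vector<Item> inventory;
};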
  19. Women's Studies

    Quote:Original post by MasterWorks Quote:Original post by Nathan Baum Mainstream society presents the image that promiscuous women are bad and wrong, whilst promiscuous men are lovable rogues. (Of course, it doesn't say that's the image it presents.) Maybe, but how do you know that what society presents isn't in turn based on biology? (I think the cause and effect of lots of things like this can be VERY difficult to discern... you can compare different cultures but women, unfortunately, tend to have varying degrees of inferior status all around the world.) As templewulf pointed out, in days of yore when food was scarce weight implied wealth, so the most portly were the most beautiful. What body type is the most attractive also varies by culture in the modern day, from the artificially elongated necks of certain African tribes to the Japanese-formalized Lolita Complex to the human Barbie dolls we know and love in the United States and similar lands. It really varies...and unless presented with sufficiently convincing evidence to the contrary, Occam's razor points to the difference in culture, nurture over nature, as having such a profound influence over variance in what is deemed sexy. Also, to throw this into the mix -- I could be wrong about this, but I've heard that one reason that the tall, skinny model figure became so prominent a fixture was because homosexual clothing designers wanted women that didn't have figures too horribly different from their own to model their designs, so that they could easily enough fit into them too for fun cross-dressing escapades. Clearly these women would need to on average be taller than the average woman is (since the average man is likewise taller) and would need to not be especially curvy. So entered the tall, skinny models we know today -- who soon became idealized visions of feminine physical perfection for many people, due to their position in the spotlight. I find it particularly ironic for the especially uncurvy women to be held as the more sexy, since researchers have determined that those with the most prominent hip-to-waist ratios (of about .65 or so) are the most fertile women on the planet. Quote:Original post by Eelco Quote:Original post by Nathan Baum Quote:Original post by Silvermyst I'd say a woman is more likely to be better qualified for the position, just like a man is more likely to be better qualified to be a firefighter. But just like I'd rather be rescued from a fire by a strong, capable woman than by a skinny, clumsy man, I'd like to think that proponents of Women Studies departments would rather have a smart, capable man to head the department than a not quite as smart, not quite as capable woman. I would lambast you for expressing such a blatantly sexist opinion, but I'm not entirely convinced you aren't being ironic. In which case, nicely done. I would lambast you for expressing such a blatant denial of facts, but I'm not entirely convinced you aren't being ironic. In which case, nicely done. I would lambast you for expressing such a blatant and failed attempt at intelligent irony, but I'm not entirely convinced you aren't being ironic. In which case, nicely done.
  20. Women's Studies

    My first Lounge post...may God help me. :P I've a few cents worth to throw into the mix. First, with regards to who's wearing the pants, I say screw all that -- no pants! Second, I understand the stereotype with regards to males having a much more powerful and persistent sex drive than women, and I might be an exception, but my experience does not support that. I've had some close female friends, one of whom in particular goes on and on at length about her sexual desires or just how much she needs sex constantly and misses it so when her boyfriend is gone, practically without end, as if it is the be-all and end-all of her existence -- and I have never heard any of my male friends mention sex or desire very much at all. Perhaps it's just that our society trains men not to talk to each other openly much, but even with male friends I become mutually very open with, it just doesn't tend to come up as a topic. Now with regards to the main theme of the topic at hand, I've the distinct impression that many people (men and women, and even self-proclaimed woman feminists) misinterpret what feminism is all about -- much in the same way many self-proclaimed Christians misinterpret what Christianity is all about. Maybe I'm talking out my ass since I admittedly haven't read much feminist literature, but my understanding was that feminism was really a movement for equal rights and opportunity for the sexes, based on the premise that the status quo was that women are subordinated. I agree with the basic premise, and think that such a movement is a good idea and that there's a lot of room for improvement -- and indeed, that a lot of ground has now been covered. I do think there's still some work to be done, and though I do hear about feministic theories or arguments that I don't agree with, that doesn't mean I'm going to lambast the whole movement as without worth. This may sound weird, but though it's not a formalized organization in the same way, I sort of see the feminist movement as working like a union for women. With regards to nature and nurture, it's not true that small children, even if they themselves may not be aware of differences between the sexes, have not already been treated differently by their parents or the community around them because of their sex. A study meant to help answer the question of how much nature and nurture influence a person's gender roles needs another approach...I might push for studying males raised as females and females raised as males, though this phenomenon is probably too rare to find and certainly shouldn't be artificially constructed for ethical reasons. I understand and agree with arguments that if a Frenchman can teach English and so on, why not let a man lead the Women's Studies department? It certainly sounds reasonable in that light, yes...but if you do start to see the feminist movement operating like a union for women, and the Women's Studies department as a sort of formalized extension of that, you start to wonder if it really is appropriate for a man to lead. If I were a dock worker, I wouldn't want anyone but another dock worker as the head of my union, that's for sure. It's certainly a touchy subject, but one that I agree should be talked about -- and if events like this appointment help bring such topics up, then that's at least one thing it's got going for it. [Edited by - Bucket_Head on November 20, 2005 9:36:57 PM]
  21. Logic problem

    Squirm and Dymytry are correct -- the (initial) solution space is in fact of size 24, since there are 12 balls x (solid or hollow) = 24. Each query can, at best, cut the solution space down to a third of what it was previously. Note that even with this solution space of size 24 rather than 12, the ceiling of log-base-3 of 24 is still 3 -- so theoretically we may be able to do it in three tries. The initial query, placing four balls on one side of the scale and four on the other with the remaining four off to the side, will cut the solution space exactly into thirds. If the scale reads balanced, then we know we need only consider the remaining four balls, each either hollow or solid for all we then know, meaning 8 remaining possibilities out of our original 24 -- one-third. If the scale tips one way or the other, then we know immediately we can ignore the four balls set aside, and the eight possibilities they carry with them. We also know that the heavier side could not contain a hollow ball (another four possibilities removed) and that the lighter side could not contain a solid ball (another four possibilities removed). This brings us to a total of 16 possibilities removed, with a remaining 8 of our original 24 -- again, one-third.
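If you'd like to convince yourself that this first weighing really does split the 24 possibilities evenly, a tiny brute-force check does the trick. Here's a minimal sketch (my own illustration, not part of the puzzle as posted) that enumerates all 24 states and counts how many fall under each outcome of weighing balls 0-3 against balls 4-7:

#include <cstdio>

// Enumerate the 24 possible states -- 12 balls, each possibly the odd one out
// as either solid (heavier) or hollow (lighter) -- and count how the first
// weighing, balls 0-3 versus balls 4-7 with balls 8-11 set aside, splits them.
int main() {
    int leftHeavier = 0, rightHeavier = 0, balanced = 0;
    for (int odd = 0; odd < 12; ++odd) {           // index of the odd ball
        for (int solid = 0; solid < 2; ++solid) {  // 1 = solid, 0 = hollow
            int delta = solid ? +1 : -1;           // weight relative to a normal ball
            int left = 0, right = 0;
            if (odd < 4)      left  += delta;
            else if (odd < 8) right += delta;
            if      (left > right) ++leftHeavier;
            else if (left < right) ++rightHeavier;
            else                   ++balanced;
        }
    }
    std::printf("%d %d %d\n", leftHeavier, rightHeavier, balanced); // prints 8 8 8
    return 0;
}

Each of the three outcomes keeps exactly one-third of the 24 states, which is what lets three weighings suffice.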
  22. Logic problem

    The common mistake with this problem (which I've seen before) is to think of a weighing as a binary operation -- with two possible results, that the left is heavier or the right is heavier. That would allow each weighing to cut the space of possible answers (commonly referred to as our search space) in half, meaning you'd need the ceiling of log-base-2 of 12 = 4 weighings to get your answer. In truth, however, each weighing is a ternary operation -- it can cut the search space into thirds. This may not make sense at first, but it does when you consider that you don't have to put everything on the scale. If you put one-third of the remaining possibilities on one side and another third on the other side, with the remaining third off to the side, there are three possible results -- one side is heavier, or the other is, or they're the same. A balanced result would indicate that our answer lies somewhere in the group we left off to the side. Theoretically, since each weighing is a ternary operation that can cut our search space into thirds, we may be able to do this in as few as the ceiling of log-base-3 of 12 = 3 weighings.
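To put a number on that, the minimum number of weighings is just the smallest k such that 3^k is at least the number of possible answers. A quick sketch (the helper name is mine, purely for illustration) that computes the bound:

#include <cstdio>

// Smallest k such that 3^k >= states, i.e. the ceiling of log-base-3 of states.
int minWeighings(int states) {
    int k = 0;
    long long reach = 1; // number of outcomes distinguishable with k weighings
    while (reach < states) {
        reach *= 3;
        ++k;
    }
    return k;
}

int main() {
    std::printf("%d\n", minWeighings(12)); // 3 -- just "which ball?"
    std::printf("%d\n", minWeighings(24)); // 3 -- which ball, and hollow or solid
    std::printf("%d\n", minWeighings(28)); // 4 -- past 27 states a fourth weighing is needed
    return 0;
}

Note that 3^3 = 27 comfortably covers both the 12-state and the 24-state framing of the problem, which is why three weighings can suffice either way.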
  23. 2D Blood

    Quote:Original post by darkzerox Bucket_head... unfortunately im not understanding exactly how this space-subdividing thing works. im picturing some kind of large grid, of say 50x50 pixels (representing the different areas).. and each particle has a pointer to this area its in....... Basically, the idea is to overlay a grid that has some granularity between having each cell be the whole world in size and...well, there's really no theoretical lower bound, but in your game you probably wouldn't want to go any finer than a pixel. What you're picturing of subdividing the world into 50x50 pixel squares sounds like a fine way to do it. The main idea is culling regions to search or process; instead of searching through all the other particles for positions when doing, say, collision testing (this general idea is applicable for organizing any arbitrary objects in space, and can easily enough be extended to 3D), we need only search those that are within the same general vicinity as the one we're currently processing. For the hierarchical way of doing this, think of trees (the kind studied in data structures). You could take any region of space, representable by any particular node in the tree, subdivide it into four sub-regions (in 2D -- in 3D it'd be eight), and continue. This can be more flexible than the 50-pixel-square method, since we might decide not to go any further down a side of the tree if there were no blood pixels at all in it, thus affording us the extra storage room to go into finer detail (if we so choose) by having our nodes represent smaller and smaller regions of space (up to some reasonable limit, where we stop). We still have our blood particles point up into the (leaf) node representing the region they're in (again, to be clear, each region has a collection of pointers to all the blood particles in it), and we probably want to give each node a pointer up the tree to its parent so we'll have an easy way to navigate around in case we want to pluck a blood pointer out of its area and move it up, over, and down into some other area. One thing you might even do is keep a count of the number of particles in an area and subdivide it if it passes some threshold. In this way we can keep tight control over the number of comparisons we need to perform between particles. Anyway, you might have heard of these 2D structures before -- they're called quad-trees (and the 3D extensions are called oct-trees). If I've misunderstood, you may not even need them for this application, but in any case they're good things to know about and employ when they do come in handy.
Quote:Original post by darkzerox okay, so then i could start iterating from a nearby location rather than from the start... however, isnt that similar to having say a pointer to the particle one up, and one left of the current particle? I suppose so, though this area grouping may be easier to maintain, and will likely need to be changed less often. Giving each particle direct pointers to adjacent blood particles may also be a good idea though...
Quote:Original post by darkzerox as for shaders.. the thought of putting x/y velocites encoded in the RGB values did come to mind.. before having heard of such textures. however, this was only meant as a space-saving technique.. since my map already had RGB values... (well, im using Uint32's now... but thats irrelevant). however... having only two values, each only up to 255, kind of limits me.. actually this thought just came to mind now.. if i use say G and B combined to make one value... i could have 256^2 values.. and then divide by say 100, to get decimals.. yada yada.. You're on the right track -- that's the basic idea. Of course, with this method, there is going to be a trade-off in precision, depending on the color depth of your textures. What it boils down to, since (as I understand it) this is entirely a graphical effect, is the question, "Does it look good (enough)?" (There's a small encode/decode sketch at the end of this post showing one way to pack a value into two channels.)
Quote:Original post by darkzerox anyways, this whole concept of "shaders" is completely new to me. for the awhile i was completely baffled by why you would render it 9 times off screen.. but.. thinking about it now... if you render it 9 times, each with an offset of one, 3 right, and 3 down.. and it "blends" their RGB values.. you would get the new momenta.. like you said... thats a pretty crazy idea. but sounds like itd be fast and requires fewer calculations.. hmm. actually would probably work a lot better too... Yep, it's extremely fast in hardware. It's basically exploiting exactly what modern 3D hardware does so well.
Quote:Original post by darkzerox ya.. you wouldnt happen to know if SDL_Surface's are considerd "textures" or are processed by the graphics card, would you? (using SDL_BlitSurface... their doc doesnt mention anything about it) I'm afraid not. SDL itself has no direct support for 3D graphics -- it provides a 2D graphics API -- and thus makes no claims about exploiting the programmability of modern graphics architectures. In order to make use of pixel/fragment shaders, you're going to have to use a 3D graphics API such as Direct3D (part of Microsoft's DirectX) or OpenGL (my personal favorite). The good news is that SDL provides fairly painless hookups for using such a 3D graphics API on top of it (particularly OpenGL, which will also work cross-platform)...but the bad news, of course, is that this may require learning a whole new API (and even a new language, for shaders) to do what it is you want to do. That may well be worthwhile to study in the long run, but the amount of studying required for accomplishing just this one task would make me doubt the worth of this approach if you do want to finish this thing up soon.
Quote:Original post by darkzerox well, the people playing my game wouldnt really *know* how i'm solving for momenta, would they? and i dont really understand why they would care, unless they are hard-core game programmers... The reason I bring up that you might still want to provide a software implementation isn't that hardcore game programmers may be concerned about the method you're using, but simply that some people's video cards aren't new enough to support hardware shaders -- so the question is whether you want them to be able to play your game or not. Typically, one of the first things you do when loading a shader dynamically is to query whether the user's hardware can actually handle it, and only after passing that test do you prepare it for execution.
Quote:Original post by darkzerox maybe i can just leave the blood in a texture and forget about arrays and vectors altogether.. (still torn between the two. with vectors i'd have to use those pointers ive now been told are bad, but arrays dont have "reserve" and such..) Re: vectors vs. arrays, vectors are essentially equivalent to arrays, except they carry slightly more overhead that allows you to do more things. If you've ever programmed a resizable array structure yourself, well, that's basically exactly what a vector is. It's just pre-prepared for you so you don't have to write it yourself, that's all.
Quote:Original post by darkzerox -- one more thing, where might i find a shader, to blend two SDL_Surface's? :S (if anyone knows) Again, you won't be able to -- SDL isn't the path to shaders. I wish you luck though, and remember that 3D graphics APIs can certainly be used for 2D, so if you go down that route, you don't necessarily have to worry about all the 3D math to accomplish what it is you want.
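On the packing idea, here's a minimal sketch (my own illustration -- the scale and offset constants are arbitrary choices, not anything from the thread) of storing one velocity component across two 8-bit channels and reading it back out:

#include <cstdint>
#include <cstdio>

// Pack a velocity component into two 8-bit channels (e.g. G and B), giving
// 256*256 = 65536 steps. kScale and kOffset are arbitrary for this sketch:
// they map roughly [-327.68, 327.67] onto [0, 65535] in steps of 0.01.
const float kScale  = 100.0f;
const float kOffset = 32768.0f;

void packVelocity(float v, std::uint8_t& g, std::uint8_t& b) {
    int packed = static_cast<int>(v * kScale + kOffset + 0.5f);
    if (packed < 0)     packed = 0;      // clamp to the representable range
    if (packed > 65535) packed = 65535;
    g = static_cast<std::uint8_t>(packed >> 8);   // high byte
    b = static_cast<std::uint8_t>(packed & 0xFF); // low byte
}

float unpackVelocity(std::uint8_t g, std::uint8_t b) {
    int packed = (g << 8) | b;
    return (packed - kOffset) / kScale;
}

int main() {
    std::uint8_t g, b;
    packVelocity(-3.25f, g, b);
    std::printf("%f\n", unpackVelocity(g, b)); // prints -3.250000
    return 0;
}

With two 8-bit channels you get 65536 steps, so at a scale of 100 that's a fixed resolution of 0.01 units over roughly +/-327; whether that's precise enough comes back to the "does it look good enough" question.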
  24. 2D Blood

    darkzerox, you seem to be thinking there has to be "one true way" of organizing your blood. This isn't true, though; it's very possible to have the actual blood particles allocated once in memory, but have multiple data structures that hold (redundant) pointers into this one blood pool. It may be that for different purposes, different sorts of structures are more efficient -- and there's nothing keeping you from using multiple such structures. The only things to make sure of are that you keep all these various data structures consistent, and that when you clean up, you only actually delete/free the actual blood particles once. That being said, a vector might be a good structure for your actual blood pool, since you can quickly and easily make it grow and push new active particles onto the end. Arrays (and vectors, being essentially fancy arrays) are naturally going to be much faster to iterate through than linked lists, since they require fewer memory dereferences to navigate and, as (I think it was) Superpig said, their elements are arranged contiguously in memory. Working with memory is basically one of the slowest things a computer can do, so as you might imagine, hardware designers have all kinds of funky caching schemes to try to improve speed, and these often assume that programs work with memory that's all generally in the same region. Working with vectors/arrays exploits this and is best for performance.
For blood particles to sense nearby blood particles, I'd suggest having some kind of space-subdividing structure that cuts space down into regions of whatever size you like (experiment with a variety of sizes and see what works best for you) and then have these regions refer to all the blood particles within them. Give each of the blood particles in the pool a pointer up into the region it's in (and of course be sure to update this as particles move) so that from any particle you can get at its nearby particles very quickly and easily. (There's a small grid sketch at the end of this post.) This space-subdividing structure can be hierarchical (so if blood particles are only in the upper left side of the screen, you can quickly cull out having to consider three-quarters of the world) if you think that would be best. At some level you might want to just have a set grid of some decided precision, which again of course doesn't have to have pixel-sized cells. Experiment and profile and see what kind of combination works best for you.
And with regards to pixel/fragment shaders, the idea there of course is to take advantage of the fact that graphics cards have been tweaked to process floats very quickly, and that, especially now that they're fully programmable, we can exploit this to process things besides just graphics. You'd just have to store your blood particle data in textures -- essentially a graphics API format for a 2D array of pixels, where color component values could represent position or other values -- and then write shader code to process these textures in whatever way you want. One thing you might try is getting an image of all your blood particles floating out there in space, but where each blood pixel is drawn, instead of a red pixel color, have an RGB color that in some coded way represents the xy momentum of the blood particle at that spot, and perhaps also whether there's even a blood particle at that spot or not. Render this to an offscreen surface 9 times (all combinations of being offset by up to a pixel in the x and y dimensions) with a shader that, for each pixel, collects the momenta rendered to it and combines them, outputting a new pixel representing the resulting momentum. You can see that this would do the job of updating each blood particle based on its neighbors' momenta, but lets the video card handle it, freeing up the CPU for other processing. You can even use another shader that takes this texture representation of the blood particles and renders it by simply outputting red whenever the texture indicates there is a blood particle at that spot, ignoring the momentum value. In this way it's already in a drawable format, so you wouldn't have to iterate over the blood to draw it, and in fact the only direct interaction the CPU would have to take with the blood would be plotting new blood particle momentum values into this texture when new blood particles are created.
The shader way sounds really neat to me, but it's strange technology that people playing your game may or may not support (though as time goes on, it'll become more commonplace). It's worth looking into, though you may want to provide a software implementation as a backup anyway. Cheers, and good luck -- do keep us posted. I'd like to play your game, once I am finally all done with school. :P
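For the space-subdividing part, here's a minimal sketch of a uniform grid (the class name, cell size, and so on are my own choices for illustration, not code from the thread): each cell keeps pointers into the one shared particle pool, and each particle remembers which cell it's in so its neighbors can be found without scanning everything:

#include <cstddef>
#include <vector>

struct Particle {
    float x, y;
    float vx, vy;
    int   cell;   // index of the grid cell this particle currently lives in
};

// A minimal uniform grid: the world is cut into kCellSize-pixel squares, and
// each cell stores pointers into the one shared particle pool.
class BloodGrid {
public:
    static const int kCellSize = 50; // arbitrary; tune to taste

    BloodGrid(int worldW, int worldH)
        : cols_(worldW / kCellSize + 1), rows_(worldH / kCellSize + 1),
          cells_(cols_ * rows_) {}

    // Assumes coordinates stay inside the world bounds passed to the constructor.
    int cellIndex(float x, float y) const {
        int cx = static_cast<int>(x) / kCellSize;
        int cy = static_cast<int>(y) / kCellSize;
        return cy * cols_ + cx;
    }

    void insert(Particle* p) {
        p->cell = cellIndex(p->x, p->y);
        cells_[p->cell].push_back(p);
    }

    // Call after moving a particle; re-buckets it only if it changed cells.
    void update(Particle* p) {
        int newCell = cellIndex(p->x, p->y);
        if (newCell == p->cell) return;
        std::vector<Particle*>& old = cells_[p->cell];
        for (std::size_t i = 0; i < old.size(); ++i) {
            if (old[i] == p) { old[i] = old.back(); old.pop_back(); break; }
        }
        p->cell = newCell;
        cells_[newCell].push_back(p);
    }

    // Particles sharing a cell with p -- its "general vicinity".
    const std::vector<Particle*>& neighbors(const Particle* p) const {
        return cells_[p->cell];
    }

private:
    int cols_, rows_;
    std::vector<std::vector<Particle*> > cells_;
};

int main() {
    BloodGrid grid(640, 480);
    Particle p = { 100.0f, 200.0f, 1.5f, -0.5f, 0 };
    grid.insert(&p);
    p.x += 60.0f;        // moved far enough to land in a new cell
    grid.update(&p);
    return static_cast<int>(grid.neighbors(&p).size()); // 1 -- just p itself
}

In practice you'd also want neighbors() to gather the eight surrounding cells, since a nearby particle can sit just across a cell boundary; the quad-tree version is the same idea with cells that split on demand.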
  25. Robot Combat

    I'm afraid I can't direct you to any resources offhand, but I do have a few ideas. Basically, I see this as a matter of positioning, and it may be convenient to have groups of bots work together. For example, you might try to have them attack enemies from different sides, so that someone is more likely to attack the player from behind. Rather than go in from exactly opposite sides, though -- which would seem to lead to the enemy dodging out of the way so the bots stupidly shoot each other -- you probably want to have them attack at some fixed angle apart from each other (not a full 180 degrees, as head-on from opposite sides would be, but maybe somewhere between 90 degrees and that). You might also think in terms of using explosive charge attacks (like grenades), placed so as to drive the enemy into moving into the squad's lines of fire. I think flocking will be good for this: having the AI think in terms of flanking the enemy by fanning out and coming at them from various sides at agreed angular offsets, and directing fire to try to herd the enemy into positions most conducive to the group dealing damage.
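To make the fanning-out idea concrete, here's a minimal sketch (function and parameter names are my own, purely illustrative -- not from any particular engine): given the target's position and the direction the squad is approaching from, it spreads the bots around the target at a fixed angular separation so no two end up directly opposite each other.

#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };

// Spread botCount bots in a fan around the target, centered on the approach
// angle, with a fixed angular separation between adjacent bots.
Vec2 flankPosition(Vec2 target, float approachAngle, float separation,
                   int botIndex, int botCount, float radius) {
    // e.g. 3 bots with ~115-degree separation land at -115, 0, +115 degrees
    // relative to the approach direction -- never directly opposite each other.
    float offset = (botIndex - (botCount - 1) * 0.5f) * separation;
    float angle  = approachAngle + offset;
    return { target.x + radius * std::cos(angle),
             target.y + radius * std::sin(angle) };
}

int main() {
    Vec2 target = { 100.0f, 100.0f };
    const float kSeparation = 2.0f;   // ~115 degrees, between 90 and 180
    for (int i = 0; i < 3; ++i) {
        Vec2 p = flankPosition(target, 0.0f, kSeparation, i, 3, 50.0f);
        std::printf("bot %d moves toward (%.1f, %.1f)\n", i, p.x, p.y);
    }
    return 0;
}

Each bot would then path toward its assigned spot and hold fire until roughly in position, which also helps keep squadmates out of its firing line.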