Todo

Members
  • Content count: 395
Community Reputation

451 Neutral

About Todo

  • Rank
    Member
  1. Vectors and I/O

    By definition the STL vector class stores its elements sequentially in a contiguous block of memory, so &v[0] is defined behavior (provided the vector isn't empty), although in this case I find its use a bit unusual.
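
    A minimal sketch of the usual reason for taking &v[0]: handing a vector's storage to a C-style API. The fill_buffer function below is just a made-up placeholder for such an API.

        #include <vector>
        #include <cstddef>

        // Hypothetical C-style API that expects a raw pointer plus an element count.
        void fill_buffer(float* data, std::size_t count);

        void example()
        {
            std::vector<float> v(64);          // contiguous storage, guaranteed by the standard
            if (!v.empty())
                fill_buffer(&v[0], v.size());  // &v[0] (or v.data() in C++11) points into that storage
        }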
  2. Blitz Basic programming?

    Quote (Original post by quickbot6): "Also, I used to know a site with tons of OpenGL tutorials, but seem to have forgotten it."

    I believe that well-known site would be NeHe Productions (http://nehe.gamedev.net/).
  3. Hardware Shaders

    I never left :-) Let's just say I've been silently in awe...
  4. Roaming

    Quote (Original post by Ezbez): "Or you could randomly generate a single angle (usually measured from the positive X-axis going counter-clockwise) and make your asteroids travel in that direction. To do that, you'd simply use the cosine of the angle for the X-velocity and the sine of the angle for the Y-velocity. Just make sure the angle is in radians (0 to 2*PI instead of 0 to 360), since C++ uses radians in its cos() and sin() functions. Or perhaps you already know the trig stuff, but I decided I should be complete. This will be very similar to the above solution except that it will always give the asteroid the same speed. Endar's solution would have it move faster along the diagonals than along the horizontal or vertical."

    In Endar's defense, you could just normalize the resulting vector (and possibly scale it to a suitable speed thereafter).
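
    A minimal sketch of both approaches, assuming a simple Asteroid struct with velocity members (the names here are made up for illustration):

        #include <cmath>
        #include <cstdlib>

        struct Asteroid { float x, y, vx, vy; };

        const float TWO_PI = 6.2831853f;

        // Approach 1: pick a random angle and derive the velocity from it (constant speed).
        void launchWithAngle(Asteroid& a, float speed)
        {
            float angle = (std::rand() / (float)RAND_MAX) * TWO_PI;   // radians, 0..2*PI
            a.vx = std::cos(angle) * speed;
            a.vy = std::sin(angle) * speed;
        }

        // Approach 2: pick random components, then normalize and rescale to the same speed.
        void launchWithRandomComponents(Asteroid& a, float speed)
        {
            float vx = (std::rand() / (float)RAND_MAX) * 2.0f - 1.0f;  // -1..1
            float vy = (std::rand() / (float)RAND_MAX) * 2.0f - 1.0f;
            float len = std::sqrt(vx * vx + vy * vy);
            if (len > 0.0f)                                            // avoid dividing by zero
            {
                a.vx = vx / len * speed;
                a.vy = vy / len * speed;
            }
        }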
  5. Quote (Original post by Kylotan): "No, it doesn't, because BMP files don't contain alpha channels. Try again with a PNG (and SDL_Image)."

     Windows Bitmaps do exist in a 32-bit flavour, but depending on both the program used to generate and/or save the image and on SDL (which I don't know the details of), that fourth (alpha) channel might just as well be neglected.
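
     If you want to verify what SDL actually gives you, checking the loaded surface's pixel format is a quick test; a rough sketch, assuming SDL 1.2 with SDL_image (the file name is only an example):

        #include <SDL.h>
        #include <SDL_image.h>
        #include <cstdio>

        void inspect()
        {
            SDL_Surface* img = IMG_Load("sprite.png");   // a PNG keeps its alpha channel
            if (img)
            {
                std::printf("bits per pixel: %d, alpha mask: 0x%08X\n",
                            img->format->BitsPerPixel, img->format->Amask);
                // An alpha mask of zero means no alpha channel survived the load.
                SDL_FreeSurface(img);
            }
        }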
  6. Quote (Original post by BUnzaga): "Ahh, so instead of using the camera, I just use my avatar position, then a forward vector as the target and some distance like 2-3 units... Thanks!"

     Or, in favour of Captain P's idea: use the dot product of the view vector (your 'forward' vector) and the normalized vector from the player to the item, like so:

         if( dot( player.view, normalize( item.position - player.position ) ) > threshold )
             item.use();

     where threshold is the cosine of the half-angle of the 'cone of influence'. You could for example use threshold = cos( 30 degrees ).
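
     A self-contained version of that test; the Vec3 type and the function names below are just stand-ins for whatever your game already uses, and the view direction is assumed to be unit length:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

        Vec3 normalize(const Vec3& v)
        {
            float len = std::sqrt(dot(v, v));
            Vec3 r = { v.x / len, v.y / len, v.z / len };
            return r;
        }

        bool canUse(const Vec3& playerPos, const Vec3& viewDir, const Vec3& itemPos)
        {
            const float threshold = std::cos(30.0f * 3.14159265f / 180.0f);  // 30 degree cone half-angle
            Vec3 toItem = { itemPos.x - playerPos.x, itemPos.y - playerPos.y, itemPos.z - playerPos.z };
            return dot(viewDir, normalize(toItem)) > threshold;
        }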
  7. Baby steps: Where to begin?

    Hi, and welcome to this community.

    Quote (Original post by Audiodoc): "Now it just so happens that, for the types of games I want to make, this engine would provide a higher graphical fidelity, so that's great. However, is a C++ background required/preferred before learning UnrealScript? Also, I know Source SDK (which I have downloaded) comes with a variety of tools (Hammer, Cinematography for cutscenes and facial animation) and integrates with XSI etc... Does Unreal Engine 3 have the same capability? I want to be able to, in the end, create rich, cinematic games. Obviously this is a huge end goal; I'm just trying to start off right, with the right engine and background. Would it be beneficial to learn C++, create a mod for Source, and then transfer at a later time over to the Unreal Engine? Can those skills learned in Source be applied to Unreal?"

    a) The UnrealScript syntax is derived from Java and JavaScript, so while having a C++ background certainly doesn't hurt, you don't really need it to get to grips with US. The basics are quite intuitive, but US is an extensive language, and it doesn't hurt that it's specifically tailored towards game development either. Have a look at the Unreal Developer Network (UDN), a fantastic resource for all things related to Unreal modding, including scripting.

    b) The Unreal Engine has a feature-complete SDK, just as Source does. Whereas Source's SDK is comprised of separate tools, UE combines them into one IDE, unsurprisingly called the Unreal Editor. It has built-in support for level & storyboard design, facial expressions, material and texture design and even a US editor.

    c) Everything you learn from one game can roughly be applied to most other games, provided you learned it right (as in 'correct') from the get-go. Most developers/publishers don't stick with one genre anyway (I guess 'don't put all your eggs in one basket' can apply here as well :)). If anything, trying and trying some more has never hurt anyone, and as you put it yourself, learning is an ever ongoing process.

    Having said that, good luck!
  8. Assuming the buildings are completely solid (opaque) and the trees are semi-transparent (translucent, not alpha-tested; see hereafter), you first draw the buildings in any order (front-to-back would minimize overdraw when depth-testing is enabled) and then draw the trees back-to-front (depth-test on, but no depth-write). In the case of alpha-tested impostors you don't even need to sort them, as you can simply draw them together with any solid geometry (i.e. in the same order and pass). There are also other, more modern techniques for efficient rendering of vegetation; I suggest looking up some of the (freely available online) chapters of the excellent GPU Gems series of books, for example.
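
     In OpenGL terms the two passes could look roughly like this; drawBuildings and drawTreesSortedBackToFront are placeholders for your own drawing code:

        #include <GL/gl.h>

        // Placeholders for the application's own drawing routines.
        void drawBuildings();
        void drawTreesSortedBackToFront();

        void renderScene()
        {
            // Pass 1: opaque geometry, depth testing and depth writes enabled.
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_TRUE);
            glDisable(GL_BLEND);
            drawBuildings();                  // any order; front-to-back reduces overdraw

            // Pass 2: translucent geometry, depth test on but depth writes off.
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDepthMask(GL_FALSE);
            drawTreesSortedBackToFront();     // sorted back-to-front relative to the camera

            glDepthMask(GL_TRUE);             // restore state for whatever is drawn next
        }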
  9. What to export to for loading in game

    Here's the short answer: you - the artist - shouldn't have to worry about that. Ultimately, engine and/or tool programmers are the ones who need to worry about getting your work into their software. The gist of the longer answer comes down to this: modern development introduces the idea of a content pipeline that imports an artist's assets, processes them and outputs a (most likely) proprietary format that the engine can use with minimal processing of its own. This is the case with many modern engines, the Unreal Engine for example. Another example is the COLLADA spec, which functions as an intermediate format (the combination of asset data and metadata to aid in the processing). A final example is the content pipeline paradigm in Microsoft's XNA platform.
  10. A common source of heavy aliasing for first-time shadow mappers is using linear filtering instead of point filtering for depth textures. Linear filtering will screw up the depth-test equation, resulting in more aliasing instead of less. Note that I realize you already know of the other, more prevalent cause of aliasing: finite texture resolution. There are a ton of techniques to remedy that (all with their respective pros and cons).
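
      For an OpenGL depth texture that boils down to something like this (shadowMapTexture is assumed to be your already-created depth texture):

        #include <GL/gl.h>

        void usePointFiltering(GLuint shadowMapTexture)
        {
            glBindTexture(GL_TEXTURE_2D, shadowMapTexture);
            // GL_NEAREST is point filtering; GL_LINEAR here is what breaks the depth comparison.
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        }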
  11. Rule Based AI

    As one of your trusted readers I could basically quote everyone above this post (excluding you yourself, of course :)). Having no one reply from the get-go probably means they're still in awe (I know I am).
  12. Trash Cans!

    Awesome, and probably only getting better from here on :-). Cheers!
  13. Force Source File Compile In C++

    As I see it, he wants the other translation units to know about some class, so he thinks he should force them to compile in the right order. My advice, therefore, would be to read up on header files and learn how to use them. I could be wrong altogether, too.
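
    For completeness, a minimal sketch of the idea; the file and class names are made up:

        // Enemy.h - the declaration that other translation units need to see.
        #ifndef ENEMY_H
        #define ENEMY_H

        class Enemy
        {
        public:
            void update();
        };

        #endif

        // Enemy.cpp would #include "Enemy.h" and define Enemy::update(),
        // and any other .cpp file that uses the class simply includes the header too;
        // the compile order of the source files then no longer matters.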
  14. enums question

    How about this comparison:

        enum DaysOfTheWeek
        {
            Sunday,
            Monday,
            // etc.
        };

    vs.

        const int Sunday = 0;
        const int Monday = 1;
        // etc.

    With the plain constants you'd have to give up auto-numbering, whereas the enum gains you (somewhat) stronger typing. Other than that, the only differences I can come up with have to do with the fact that an enumeration variable can typically only hold its own constants (compile-time checks), and with IDE integration allowing for more robust auto-completion (you could always use tricks such as prefixing your constants, of course). Any thoughts?
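
    To illustrate the compile-time check (the names are of course made up):

        enum DaysOfTheWeek { Sunday, Monday /* etc. */ };
        const int FirstDay = 0;

        void example()
        {
            DaysOfTheWeek d = Sunday;  // fine
            // d = 42;                 // error: an int doesn't implicitly convert to the enum
            int i = FirstDay;
            i = 42;                    // perfectly legal; the compiler can't help you here
        }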
  15. Because pitch (m_pBitmap->pitch) and width (m_pBitmap->w) are two different things (although they could in some cases have the same value). The image pitch or stride is generally equal to the width of the image in pixels times the size of a pixel in bytes, plus possibly a few bytes of padding to align each row. For example, a 256x256, 24 bits-per-pixel RGB image would have a pitch of:

          pitch = width * number of channels * bytes per channel
                = 256 * 3 * 1
                = 768 bytes

      (instead of width = 256, which is measured in pixels).
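
      A small sketch of how that computation typically looks, including the common 4-byte row alignment (whether and how rows are padded depends on the format/API, so treat the padding as an assumption):

        // Bytes per row for a tightly packed image, rounded up to a 4-byte boundary.
        unsigned int computePitch(unsigned int width, unsigned int channels, unsigned int bytesPerChannel)
        {
            unsigned int rowBytes = width * channels * bytesPerChannel;  // 256 * 3 * 1 = 768 for the example above
            return (rowBytes + 3) & ~3u;                                 // already a multiple of 4 here, so pitch = 768
        }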