
Blue*Omega

Members
  • Content count

    256
  • Joined

  • Last visited

Community Reputation

150 Neutral

About Blue*Omega

  • Rank
    Member
  1. Quote: "When you've said mixed heightmap/mesh system what do you mean?" Exactly what you said. Create your basic heightmap using a greyscale image/math function/whatever, and then add static meshes to it, or use the meshes to replace part of the heightmap. One thing that I really liked about the UT2k3 implementation was that you could select quads from the heightmap and make them invisible, effectively creating big holes in your landscape, and then fill those holes in with a static mesh. (This allows for caves and such. Very nice!)
  2. First off: Thanks for the source tags! Makes things easier to read. Second: When I mentioned the lights, I asked if they were on, and if so to turn them OFF. Only for a moment, and simply for debug purposes. All you would really have to do is call glDisable(GL_LIGHTING) right before your draw call. I would recommend doing this until you have the quad showing up, THEN enabling lighting again and working from there. Always start with the minimal possible settings and work your way up. The less that's enabled, the less that can go wrong. Also, once again just for debugging purposes, try commenting out the following two lines: glEnableClientState(GL_NORMAL_ARRAY); glNormalPointer(GL_FLOAT, 0, m_normalArray); Then run the code and see if it makes a difference. Again, this is just starting with a minimal amount of information (position and texcoords) and then working your way up after you get it working. Sorry if you've already tried those, but it's not clear that you have.
  3. The problem is that unless you have a special plug-in or are VERY careful, a 3ds file isn't going to be a nice regular grid of points, which is probably what you're used to when it comes to landscaping. This can be both a blessing and a curse: It's great because you can make really any shape you want. Arches, cliffs, caves, whatever. The restrictions of a heightmap are gone! But with that also goes the convenience. Since geometry can literally be anywhere in the scene, there is no easy way to handle collision detection (which is tied very closely to camera positioning). Essentially, you would have to do a per-poly collision detection method. This is very slow unless you can properly break the scene into manageable chunks using, say, a quad tree or similar form of scene management. This is actually, afaik, the approach Doom 3 takes. Drop in a mesh and it will handle the rest. Wonderfully convenient, but very difficult to code. Honestly, this seems like a bit much for a "little project". What exactly are you trying to achieve? There may be a simpler way. (For example, UT2k3/4 uses a mixed heightmap/mesh system. That may be more fitting for what you're looking for.)
  4. Did you say that you did have lighting turned on? Have you tried switching it off and rendering? If the light or normals are configured weird it could easily be drawing a black poly on the screen, which would make it effectively invisible. On a similar note, based on some really horrid lighting bugs in the past I always always ALWAYS clear the screen to a non-black color when I'm doing my testing/developing. This helps immensely, as black or really dark polys become visible against it. My favorite "debug" setting is: glClearColor(0.1f, 0.1f, 0.2f, 1.0f); Gives you a nice bluish-grey background that's not very distracting but helps the details show up.
  5. Ah! Thank you! Googling those gives me everything I need... So why was it so damn hard to find in the first place??? *sigh* Thank you again!
  6. This works for me:

     // Init stuff
     GLfloat Vertices[4][3];
     GLfloat m_texcoordArray[4][2];
     GLubyte m_indexArray[4];

     Vertices[0][0] = 0.0f; Vertices[0][1] = 0.0f; Vertices[0][2] = 0.0f;
     Vertices[1][0] = 1.0f; Vertices[1][1] = 0.0f; Vertices[1][2] = 0.0f;
     Vertices[2][0] = 1.0f; Vertices[2][1] = 1.0f; Vertices[2][2] = 0.0f;
     Vertices[3][0] = 0.0f; Vertices[3][1] = 1.0f; Vertices[3][2] = 0.0f;

     m_texcoordArray[0][0] = 0.0f; m_texcoordArray[0][1] = 0.0f;
     m_texcoordArray[1][0] = 1.0f; m_texcoordArray[1][1] = 0.0f;
     m_texcoordArray[2][0] = 1.0f; m_texcoordArray[2][1] = 1.0f;
     m_texcoordArray[3][0] = 0.0f; m_texcoordArray[3][1] = 1.0f;

     m_indexArray[0] = 0; m_indexArray[1] = 1; m_indexArray[2] = 2; m_indexArray[3] = 3;

     // Later on (in draw loop):
     // Set camera, bind texture, blah blah blah, then...
     glVertexPointer(3, GL_FLOAT, 0, Vertices);
     glTexCoordPointer(2, GL_FLOAT, 0, m_texcoordArray);
     glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, m_indexArray);

     This drew a small quad in the center of my screen with the current texture mapped to it. Note that I didn't even have to call glEnableClientState() on anything, but that may have just been me. I recommend trying it out with minimal data like this (no normals or anything) first, getting that to work, then building on it from there.
  7. I'm having a bit of trouble finding information about accepting input from a game controller like a joystick or gamepad (think PS2 controllers) without using DirectInput. I'd rather my code be as platform independent as possible (which DX obviously isn't) but I've still got DirectX code running just to take input from something other than a keyboard and mouse! What I'm looking for: A way of polling these devices reliably through, say, Win32 (which I'm sure I could find equivalent methods for on other OSes) OR a cross-platform library that DOESN'T have a lot of graphics stuff tacked on. (SDL is out) Any suggestions?
  8. I think there's a slight mis-communication here. When most people think "render to texture" they think of creating a pBuffer/FBO and rendering directly to that. What it appears you're doing is rendering to your normal backbuffer, then copying the result to a texture with glCopyTexImage2D(), am I correct? I really don't have much experience with this issue, but I thought I could help clear up the confusion a bit. EDIT: a couple of questions, now that I've looked at the code some more... Why exactly are you doing the glGetTexImage()? If all you want to do is use the texture you just copied, it's unnecessary. (If you want to manipulate that information it's good, which is why I ask) Also, you said that you're not calling glBegin()/glEnd()? Since this is simply a straight backbuffer render, you will need to do that SOMEWHERE in the draw code. (Unless you're using glDrawElements() or such, but it didn't sound like it) Also, what is the format of your back buffer? If it's 32 bits, you may need to change GL_RGB in glCopyTexImage2D() to GL_RGBA to account for the alpha channel. Hope that helps!
  9. OpenGL

    Actually, all of the components are used for bump mapping. This is simply a more exact way of doing it than your standard greyscale image (which I think is now referred to as "emboss mapping"). The way I understand it (could be wrong) is that when using an old greyscale bump map the "bumpiness" is generated as such:
    - For each pixel, a sampling is taken of the pixels around it.
    - Based on those samplings, a "slope" for the pixel is generated, which essentially tells the program which direction the pixel is "facing" (i.e. a normal!)
    - That normal is used to compute the per-pixel lighting.
    There are a couple of downsides to this method, though. Since it has to rely on the pixels around it for directional information, greyscale bumpmaps tend to look blurry close up, and can't have any nice sharp corners. Also, you have to provide a "depth" for the map in order to get the appropriate amount of bump. Normal maps nicely sidestep these problems by encoding an exact direction (normal) into each pixel, which is independent of those around it. Both of these methods are collectively known as "bump mapping" but the term is more commonly attached to normal mapping in respect to modern hardware. So, your forms of bump mapping are: Greyscale - Emboss Map - Blurry and Old. Color - Normal Map - Crisp and New!
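    The neighbor-sampling step described above can be sketched as a small helper. This is just an illustration of the idea (the names and the central-difference choice are mine, not from any particular engine):

    ```cpp
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Hypothetical helper: derive a per-pixel normal from a greyscale height
    // map by sampling the neighboring heights (central differences), as the
    // post describes. 'heights' is row-major, width*height floats; 'scale' is
    // the "depth" knob that greyscale maps need.
    Vec3 normalFromHeightmap(const float* heights, int width, int height,
                             int x, int y, float scale)
    {
        // Clamp sampling at the map edges.
        auto h = [&](int px, int py) {
            px = px < 0 ? 0 : (px >= width  ? width  - 1 : px);
            py = py < 0 ? 0 : (py >= height ? height - 1 : py);
            return heights[py * width + px];
        };

        // Slope in x and y from the surrounding pixels.
        float dx = (h(x + 1, y) - h(x - 1, y)) * scale;
        float dy = (h(x, y + 1) - h(x, y - 1)) * scale;

        // The surface "faces" away from the slope; normalize the result.
        Vec3 n = { -dx, -dy, 1.0f };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        n.x /= len; n.y /= len; n.z /= len;
        return n;
    }
    ```

    A flat map yields straight-up normals, and the blurriness complaint falls out of the math: any sharp step gets averaged across its neighbors.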
  10. For the record, MD3 models (Quake 3) actually use the "morphing animation" method as well. There was an MD4 format later developed for Quake 3 that used skeletal animation, but I'm not aware of how often or even if it was used. The MD5 models, however, that are used by Doom 3 are completely skeletal based, and actually have a very nice format once you get around a few quirks. (Plain text, animations are stored in separate files, etc.) Just my $0.02
  11. Soooo..... after a good night of head bashing, pounding the keyboard, and lots of muttered curses, I have discovered my problem! When calling gluPerspective() I was calculating the aspect ratio as so: float aspect = (float)( window.width / window.height ); This looked all well and good till I started stepping through the debug info and discovered... my aspect ratio was coming out as 1.000000. Not good. So I shout some more obscenities and change the above code to: float aspect = ( (float)window.width / (float)window.height ); and NOW we get our nice happy aspect of 1.333333 (or 1.77777 if you're doing widescreen, but whatever), cubes actually look like cubes now (was wondering about that), AND the coordinates I'm calculating for my frustum culling actually make sense in screen space now! So many problems caused by bad casting... Moral of the story: casting sucks. Thanks for your help, though, JohnBolton! [Edited by - Blue*Omega on April 26, 2005 9:30:58 AM]
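  The bug in that post is plain integer division: the division is evaluated on ints and truncates before the cast ever runs. A minimal sketch of both forms:

  ```cpp
  // The casting pitfall from the post: in the broken version the int
  // division happens first, truncating 640/480 to 1, and only THEN is the
  // result converted to float.
  float brokenAspect(int width, int height)
  {
      return (float)(width / height);        // 640/480 == 1, cast gives 1.0f
  }

  // Casting each operand first promotes the whole division to float.
  float fixedAspect(int width, int height)
  {
      return (float)width / (float)height;   // 640.0f/480.0f == 1.3333...
  }
  ```

  Same parentheses-looking code, completely different results, which is exactly why the bug only showed up in the debugger.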
  12. Okay, so it looks like my equation was "close, but not quite". So, a couple of questions: 1) When setting up your projection matrix with gluPerspective(), is the value of the Field of View the "full" field of view (from one edge of the screen to the next) or the "half" field of view? (For example, entering 45 would give an actual 90 degree FOV) 2) Is that FOV value in degrees or radians? 3) Does OpenGL do any other transforms to the view on its own? It seems like the values I end up with are just a small percentage off all the time... I'd appreciate any thoughts you might have! Thanks!
  13. Just want a confirmation that my math is correct (since I can't test it right now, I'm at work >_<) I'm working on an engine where the camera is always locked into looking down Z and the objects are always a certain distance from the camera (for the sake of simplicity just imagine that it's a sprite based engine.) Given these restrictions, I feel like I can safely reduce frustum culling down into a simple 2D bounding box test. I just need to calculate the "box" of the screen area and compare it against the "sprites" bounding box. Given a camera position (x, y, z) and a field of view (f) I think you would calculate the box as such: -z is effectively the distance from the drawing plane, so we'll call it d. The frustum is, viewed from the side, an isosceles triangle, so d splits it into two right triangles, with an angle of f adjacent to d. That would mean the side opposite of angle f is of a length d * tan(f), therefore the full length of the visible plane is (d * tan(f)) * 2, with the other direction simply being that times the aspect ratio of the screen. Then of course you simply add x and y to those values to get the "real" box. This may be clearer: So let's say the camera is at (10, 5, -21) and the FOV is 25 degrees (yeah, that's an odd value, but 45 degrees isn't a good case test. I'll let you figure out why.) That means that half the screen width is 21*tan(25) = 9.79 Which means that half the screen height is 9.79 * 3/4 = 7.34 So our untranslated screen coords are (-9.79, -7.34) to (9.79, 7.34) Which translated would be (0.21, -2.34) to (19.79, 12.34) in "world space" Am I correct here? Just checking. (I'm really rusty on my trig) [Edited by - Blue*Omega on April 25, 2005 6:30:44 PM]
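  The arithmetic in that post can be checked with a small sketch. The names are mine, and it follows the post's own assumption that f is the angle adjacent to d (i.e. a half-angle), which is precisely the point the follow-up post ends up questioning:

  ```cpp
  #include <cmath>

  struct Box2D { float minX, minY, maxX, maxY; };

  // Screen-space box of the locked-down-Z frustum from the post.
  // 'fovDegrees' is treated as the half-angle between the view axis and the
  // frustum edge, matching the post's math; 'aspect' is height/width
  // (e.g. 3.0f/4.0f for a 4:3 screen).
  Box2D screenBox(float camX, float camY, float camZ,
                  float fovDegrees, float aspect)
  {
      const float pi = 3.14159265358979f;
      float d = -camZ;                                   // distance to the drawing plane
      float halfW = d * std::tan(fovDegrees * pi / 180.0f);
      float halfH = halfW * aspect;
      // Translate by the camera's x/y to get the "real" box in world space.
      return { camX - halfW, camY - halfH, camX + halfW, camY + halfH };
  }
  ```

  Plugging in the post's numbers, camera (10, 5, -21) with f = 25 degrees, reproduces its box of roughly (0.21, -2.34) to (19.79, 12.34), so the trig itself checks out under that half-angle reading.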
  14. NeHe's tutorials are great, but from what I gather he's not much of an author. Writing a stand-alone tutorial and a full book are two very different things. Besides, he's got the CD that's apparently been selling rather well, and his website is really THE place for OpenGL beginners to go, so I don't think he's in a rush to gain a bigger audience.
  15. OpenGL

    Doom 3's "bump maps" are what most people refer to as a "normal map" (the traditional greyscale map is a "height map"). The idea behind a normal map is really quite simple in theory. For each vertex in a 3D scene you have a normal that points away from the surface (perpendicular most of the time), and from that you are able to calculate how well lit a surface is. Normal mapping is the exact same thing but on a per-pixel level. Typically you will have an RGB image (Alpha can be used for special effects, but we won't worry about that) where each color component represents a component of your normal. So your Red value becomes the X normal component, Green becomes your Y component, and Blue your Z. (Which explains why most normal maps are a bluish tint. Surfaces usually point primarily "up", or along Z) By replacing the surface normal with normals sampled from this map, you end up with lighting information that varies on a per-pixel level, and gives the appearance of much more detailed geometry than is actually present. (Did that make ANY sense?)
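    The channel-to-component mapping described there amounts to one remap per texel. A minimal sketch (names are mine; the 0-255 to -1..1 remap is the standard convention the post is describing):

    ```cpp
    struct Normal { float x, y, z; };

    // Decode one texel of a normal map: each 0-255 color channel is remapped
    // into a -1..1 normal component (R -> X, G -> Y, B -> Z).
    Normal decodeNormal(unsigned char r, unsigned char g, unsigned char b)
    {
        auto remap = [](unsigned char c) { return (c / 255.0f) * 2.0f - 1.0f; };
        return { remap(r), remap(g), remap(b) };
    }
    ```

    The "bluish tint" falls straight out of this: a surface pointing straight up has normal (0, 0, 1), which encodes back to roughly (128, 128, 255) - mostly blue.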