


  1. Blue*Omega


    Quote: "When you've said mixed heightmap/mesh system, what do you mean?"

    Exactly what you said. Create your basic heightmap using a greyscale image/math function/whatever, and then add static meshes to it, or use the meshes to replace part of the heightmap. One thing that I really liked about the UT2k3 implementation was that you could select quads from the heightmap and make them invisible, effectively creating big holes in your landscape, and then fill those holes in with a static mesh. (This allows for caves and such. Very nice!)
  2. Blue*Omega

    draw quad with VAs-duh !

    First off: Thanks for the source tags! Makes things easier to read.

    Second: When I mentioned the lights, I asked if they were on, and if so to turn them OFF. Only for a moment, and simply for debug purposes. All you would really have to do is call glDisable(GL_LIGHTING) right before your draw call. I would recommend doing this until you have the quad showing up, THEN enabling lighting again and working from there. Always start with the minimal possible settings and work your way up. The less that's enabled, the less that can go wrong.

    Also, once again just for debugging purposes, try commenting out the following two lines:

        glEnableClientState(GL_NORMAL_ARRAY);
        glNormalPointer(GL_FLOAT, 0, m_normalArray);

    Then run the code and see if it makes a difference. Again, this is just starting with a minimal amount of information (position and texcoords) and then working your way up after you get it working. Sorry if you've already tried those, but it's not clear that you have.
  3. Blue*Omega


    The problem is that unless you have a special plug-in or are VERY careful, a 3ds file isn't going to be a nice regular grid of points, which is probably what you're used to when it comes to landscaping. This can be both a blessing and a curse: it's great because you can make really any shape you want. Arches, cliffs, caves, whatever. The restrictions of a heightmap are gone! But with them also goes the convenience. Since geometry can literally be anywhere in the scene, there is no easy way to handle collision detection (which is tied very closely to camera positioning). Essentially, you would have to do a per-poly collision detection method. This is very slow unless you can properly break the scene into manageable chunks using, say, a quadtree or similar form of scene management. This is actually, afaik, the approach Doom 3 takes. Drop in a mesh and it will handle the rest. Wonderfully convenient, but very difficult to code.

    Honestly, this seems like a bit much for a "little project". What exactly are you trying to achieve? There may be a simpler way. (For example, UT2k3/4 uses a mixed heightmap/mesh system. That may be more fitting for what you're looking for.)
  4. Blue*Omega

    draw quad with VAs-duh !

    Did you say that you did have lighting turned on? Have you tried switching it off and rendering? If the light or normals are configured weirdly it could easily be drawing a black poly on the screen, which would make it effectively invisible.

    On a similar note, after some really horrid lighting bugs in the past I always always ALWAYS clear the screen to a non-black color when I'm doing my testing/developing. This helps immensely, as black or really dark polys stay visible. My favorite "debug" setting is:

        glClearColor(0.1f, 0.1f, 0.2f, 1.0f);

    Gives you a nice bluish-grey background that's not very distracting but helps the details show up.
  5. Blue*Omega

    Gamepad/Joystick without DirectX

    Ah! Thank you! Googling those gives me everything I need... So why was it so damn hard to find in the first place??? *sigh* Thank you again!
  6. Blue*Omega

    draw quad with VAs-duh !

    This works for me:

        // Init stuff
        GLfloat Vertices[4][3];
        GLfloat m_texcoordArray[4][2];
        GLubyte m_indexArray[4];

        Vertices[0][0] = 0.0f; Vertices[0][1] = 0.0f; Vertices[0][2] = 0.0f;
        Vertices[1][0] = 1.0f; Vertices[1][1] = 0.0f; Vertices[1][2] = 0.0f;
        Vertices[2][0] = 1.0f; Vertices[2][1] = 1.0f; Vertices[2][2] = 0.0f;
        Vertices[3][0] = 0.0f; Vertices[3][1] = 1.0f; Vertices[3][2] = 0.0f;

        m_texcoordArray[0][0] = 0.0f; m_texcoordArray[0][1] = 0.0f;
        m_texcoordArray[1][0] = 1.0f; m_texcoordArray[1][1] = 0.0f;
        m_texcoordArray[2][0] = 1.0f; m_texcoordArray[2][1] = 1.0f;
        m_texcoordArray[3][0] = 0.0f; m_texcoordArray[3][1] = 1.0f;

        m_indexArray[0] = 0; m_indexArray[1] = 1;
        m_indexArray[2] = 2; m_indexArray[3] = 3;

        // Later on (in draw loop)
        // Set camera, bind texture, blah blah blah, then...
        glVertexPointer(3, GL_FLOAT, 0, Vertices);
        glTexCoordPointer(2, GL_FLOAT, 0, m_texcoordArray);
        glDrawElements(GL_QUADS, 4, GL_UNSIGNED_BYTE, m_indexArray);

    This drew a small quad in the center of my screen with the current texture mapped to it. Note that I didn't even have to call glEnableClientState() on anything, but that may have just been me. I recommend trying it out with minimal data like this (no normals or anything) first, getting that to work, then building on it from there.
  7. I'm having a bit of trouble finding information about accepting input from a game controller like a joystick or gamepad (think PS2 controllers) without using DirectInput. I'd rather my code be as platform independent as possible (which DX obviously isn't), but I've still got DirectX code running just to take input from something other than a keyboard and mouse! What I'm looking for:

      A way of polling these devices reliably through, say, Win32 (which I'm sure I could find equivalent methods for on other OSes), OR a cross-platform library that DOESN'T have a lot of graphics stuff tacked on. (SDL is out.)

      Any suggestions?
  8. Blue*Omega

    Render to texture issue

    I think there's a slight miscommunication here. When most people think "render to texture" they think of creating a pBuffer/FBO and rendering directly to that. What it appears you're doing is rendering to your normal backbuffer, then copying the result to a texture with glCopyTexImage2D(), am I correct? I really don't have much experience with this issue, but I thought I could help clear up the confusion a bit.

    EDIT: a couple of questions, now that I've looked at the code some more... Why exactly are you doing the glGetTexImage()? If all you want to do is use the texture you just copied, it's unnecessary. (If you want to manipulate that information it's good, which is why I ask.) Also, you said that you're not calling glBegin()/glEnd()? Since this is simply a straight backbuffer render, you will need to do that SOMEWHERE in the draw code. (Unless you're using glDrawElements() or such, but it didn't sound like it.) Also, what is the format of your back buffer? If it's 32 bits, you may need to change GL_RGB in glCopyTexImage2D() to GL_RGBA to account for the alpha channel. Hope that helps!
  9. Blue*Omega

    Doom3 bump map format

    Actually, all of the components are used for bump mapping. This is simply a more exact way of doing it than your standard greyscale image (which I think is now referred to as "emboss mapping"). The way I understand it (could be wrong) is that when using an old greyscale bump map the "bumpiness" is generated as such:

    - For each pixel, a sampling is taken of the pixels around it.
    - Based on those samplings, a "slope" for the pixel is generated which essentially tells the program which direction the pixel is "facing" (IE: a normal!)
    - That normal is used to compute the per-pixel lighting.

    There are a couple of downsides to this method, though. Since it has to rely on the pixels around it for directional information, greyscale bump maps tend to look blurry close up and can't have any nice sharp corners. Also, you have to provide a "depth" for the map in order to get the appropriate amount of bump. Normal maps neatly sidestep these problems by encoding an exact direction (normal) into each pixel, which is independent of those around it. Both of these methods are collectively known as "bump mapping", but the term is more commonly attached to normal mapping in respect to modern hardware. So, your forms of bump mapping are:

    - Greyscale - Emboss Map - Blurry and Old.
    - Color - Normal Map - Crisp and New!
  10. Blue*Omega

    how to handle animated vertices?

    For the record, MD3 models (Quake 3) actually use the "morphing animation" method as well. There was an MD4 format later developed for Quake 3 that used skeletal animation, but I'm not aware of how often, or even if, it was used. The MD5 models used by Doom 3, however, are completely skeleton-based, and actually have a very nice format once you get around a few quirks. (Plain text, animations are stored in separate files, etc.) Just my $0.02
  11. Blue*Omega

    Quick Trig Check (Frustum Culling)

    Soooo..... after a good night of head bashing, pounding the keyboard, and lots of muttered curses, I have discovered my problem! When calling gluPerspective() I was calculating the aspect ratio as so:

        float aspect = (float)( window.width / window.height );

    This looked all well and good till I started stepping through the debug info and discovered... my aspect ratio was coming out as 1.000000. Not good. So I shout some more obscenities and change the above code to:

        float aspect = ( (float)window.width / (float)window.height );

    and NOW we get our nice happy aspect of 1.333333 (or 1.777777 if you're doing widescreen, but whatever), cubes actually look like cubes now (was wondering about that), AND the coordinates I'm calculating for my frustum culling actually make sense in screen space now!

    So many problems caused by bad casting... Moral of the story: casting sucks. Thanks for your help, though, JohnBolton!

    [Edited by - Blue*Omega on April 26, 2005 9:30:58 AM]
  12. Blue*Omega

    Quick Trig Check (Frustum Culling)

    Okay, so it looks like my equation was "close, but not quite". So, a couple of questions:

    1) When setting up your projection matrix with gluPerspective(), is the value of the field of view the "full" field of view (from one edge of the screen to the other) or the "half" field of view? (For example, entering 45 would give an actual 90 degree FOV.)
    2) Is that FOV value in degrees or radians?
    3) Does OpenGL do any other transforms to the view on its own?

    It seems like the values I end up with are just a small percentage off all the time... I'd appreciate any thoughts you might have! Thanks!
  13. Just want a confirmation that my math is correct (since I can't test it right now, I'm at work >_<). I'm working on an engine where the camera is always locked into looking down Z and the objects are always a certain distance from the camera (for the sake of simplicity just imagine that it's a sprite based engine). Given these restrictions, I feel like I can safely reduce frustum culling down to a simple 2D bounding box test. I just need to calculate the "box" of the screen area and compare it against the sprites' bounding boxes.

      Given a camera position (x, y, z) and a field of view (f), I think you would calculate the box as such: -z is effectively the distance from the drawing plane, so we'll call it d. The frustum, viewed from the side, is an isosceles triangle, so d splits it into two right triangles, with an angle of f adjacent to d. That would mean the side opposite angle f is of length d * tan(f), therefore the full length of the visible plane is (d * tan(f)) * 2, with the other direction simply being that times the aspect ratio of the screen. Then of course you simply add x and y to those values to get the "real" box.

      This may be clearer: let's say the camera is at (10, 5, -21) and the FOV is 25 degrees (yeah, that's an odd value, but 45 degrees isn't a good test case. I'll let you figure out why.)

      That means that half the screen width is 21 * tan(25) = 9.79
      Which means that half the screen height is 9.79 * 3/4 = 7.34
      So our untranslated screen coords are (-9.79, -7.34) to (9.79, 7.34)
      Which translated would be (0.21, -2.34) to (19.79, 12.34) in "world space"

      Am I correct here? Just checking. (I'm really rusty on my trig)

      [Edited by - Blue*Omega on April 25, 2005 6:30:44 PM]
  14. Blue*Omega

    NeHe book - why not?

    NeHe's tutorials are great, but from what I gather he's not much of an author. Writing a standalone tutorial and a full book are two very different things. Besides, he's got the CD that's apparently been selling rather well, and his website is really THE place for OpenGL beginners to go, so I don't think he's in a rush to gain a bigger audience.
  15. Blue*Omega

    Doom3 bump map format

    Doom 3's "bump maps" are what most people refer to as a "normal map" (the traditional grayscale map is a "height map"). The idea behind a normal map is really quite simple in theory. For each vertex in a 3D scene you have a normal that points away from the surface (perpendicular most of the time), and from that you are able to calculate how well lit a surface is. Normal mapping is the exact same thing, but on a per-pixel level. Typically you will have an RGB image (alpha can be used for special effects, but we won't worry about that) where each color component represents a component of your normal. So your Red value becomes the X normal component, Green becomes your Y component, and Blue your Z. (Which explains why most normal maps have a bluish tint: surfaces usually point primarily "up", or along Z.) By replacing the surface normal with normals sampled from this map, you end up with lighting information that varies on a per-pixel level and gives the appearance of much more detailed geometry than is actually present. (Did that make ANY sense?)