


About Megaboz

  1. Megaboz

    Weird Glitch

    You haven't really posted enough code to go by, but guessing at what's there I would say you're either not setting the values in your array properly somewhere, or you're indexing it/checking it improperly. It's difficult to help more without knowing what xPos/yPos/wizzy_width/wizzy_height, 32, your array, etc represent.
  2. Megaboz

    Re: Flight Sim rotational problem

    If I'm reading you correctly, you're suffering from gimbal lock because you're rotating one axis at a time: the second rotation happens around an axis that's relative to the first rotation, not the original frame, so it isn't the axis you intended. There's no good way to overcome this with Euler angles applied one axis at a time if you want all degrees of freedom. As has been suggested, you need to perform the whole rotation at once by picking the correct arbitrary axis and rotating about it (axis-angle). A lot of people accomplish this with quaternions, converting the result to an axis-angle. As long as you use them properly, quaternions will solve gimbal lock and give you your axis-angle. I'm far from an expert on them, but I have implemented them; there are a number of pages on GameDev that cover quaternions, so try searching around. Good luck :)
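To illustrate the quaternion route mentioned above, here's a minimal sketch of converting a unit quaternion to the axis-angle form that something like glRotatef expects. The Quat and AxisAngle types are hypothetical names, not from any particular library, and the quaternion is assumed to be normalized.

```cpp
#include <cmath>

// Hypothetical minimal quaternion type (w, x, y, z), assumed normalized.
struct Quat { float w, x, y, z; };

struct AxisAngle { float angleRad; float x, y, z; };

// Convert a unit quaternion to axis-angle form.
AxisAngle toAxisAngle(const Quat& q) {
    AxisAngle out;
    out.angleRad = 2.0f * std::acos(q.w);
    float s = std::sqrt(1.0f - q.w * q.w);    // sin(angle/2)
    if (s < 1e-6f) {                          // angle ~ 0: axis is arbitrary
        out.x = 1.0f; out.y = 0.0f; out.z = 0.0f;
    } else {
        out.x = q.x / s; out.y = q.y / s; out.z = q.z / s;
    }
    return out;
}
```

Note that glRotatef takes degrees, so you'd multiply angleRad by 180/pi before passing it on.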
  3. Megaboz

    Slower with fewer polygons?

    That did indeed seem to fix the problem (I didn't think it would be a vsync thing), but two questions: 1. If I keep wait-for-vsync off, isn't it going to tear? Is there another solution that would fix it while keeping wait-for-vsync on? 2. Why would fewer polygons cause it to run slower when wait-for-vsync is on? Why wouldn't it be 60 FPS like when more polygons are on the screen? Thanks
  4. Hello - I ran into a strange problem today, I was hoping someone may have an idea of what's going on. In a nutshell: 1. First I loaded two asteroid meshes and moved them around the screen. I was getting around 60 FPS. 2. Then I also loaded a spaceship mesh and moved it around the screen (so now I had 3 meshes, all on screen, all textured, all being translated and rotated). I was still getting around 60 FPS. 3. Then I took away the asteroids and left JUST the ship, but didn't change anything else. Suddenly the program was running choppy, jumping from 60 FPS down to 40 FPS and back up repeatedly.

    I thought maybe I had broken something non-graphics-related by taking the two asteroids out, so I replaced the spaceship's vertices with another random mesh I had lying around. The framerate jumped back up - so it's actually the mesh that's causing the slowdown, but only when it's the only mesh on the screen. The moment I put any other mesh on screen with it, the frame rate is fine. For the hell of it, I tried two spaceship meshes on screen at the same time, and the frame rate dropped horribly. For some reason, when this mesh is the only mesh on the screen, it does badly; put it up with another mesh, even one with 4x the number of polygons, and the framerate is great again.

    I got the spaceship mesh off a free model website. Can someone think of a hardware or OpenGL reason why this would be happening? Also, I'm using l3ds to load the mesh. I'll include a link to the mesh if that helps. Thanks a lot. Space Fighter mesh (3ds)
  5. Megaboz

    Pointer Dereferencing Cost

    Quote:Original post by harnacks Well in response to both posts, it's not that it's too slow now, or that a single pointer dereference might be too slow. I've been toying around with a Starcraft-like game for about 3 months now, just to brush up on my C++. Each playable 'object' in the game has between 4 and 8 pointers in it, and I may have up to 300 of these in a game, meaning I may have about 1200-2400 pointers active during the game loop. With 2400 pointers, would I notice a performance hit over static structures?

    Honestly, as said here, I wouldn't really worry about it unless you specifically see an issue. Non-game related, but a large-scale biological neural network emulator I've been working on for the last few years is EXTREMELY heavy on pointers and non-native (to C++) referencing structures alike. A sample simulation could have 10,000 neurons with 50 synapses each, plus a load of parallel-simulation housekeeping going on at the same time, all of it heavily dependent on dereferencing. It's not speedy, but the hit from pointers isn't anything out of the ordinary; there are certainly more serious bottlenecks that affect it. So I would keep going with your plan. :)
  6. Megaboz

    Calculating vector from quaternion

    Quote:Original post by SiCrane Assuming you wanted your camera's rotation to be applied to the translation, you would transform the translation vector by the camera's rotation quaternion. So it sounds like you would transform (0, 0, 1) by some quaternion q. This can be done by converting the quaternion into a matrix and then using that matrix to transform the translation vector. However, you can also skip the conversion to a rotation matrix by transforming the translation vector with the quaternion directly. The equation for that is v_rotated = q * v_original * q^-1.

    Thanks for your responses, I managed to get it working with your help, it wasn't quite clear in my head until now. Thanks again!
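The direct form quoted above (v_rotated = q * v_original * q^-1) can be sketched without building any matrices. The Quat/Vec3 types and the rotate function below are hypothetical names; q is assumed to be a unit quaternion, so its inverse is just its conjugate:

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };  // assumed normalized
struct Vec3 { float x, y, z; };

// Rotate v by unit quaternion q: v' = q * (0, v) * q^-1.
// This uses the expanded form  t = 2 * cross(q.xyz, v);
// v' = v + q.w * t + cross(q.xyz, t), which avoids explicit
// quaternion products.
Vec3 rotate(const Quat& q, const Vec3& v) {
    Vec3 u{q.x, q.y, q.z};
    Vec3 t{2*(u.y*v.z - u.z*v.y),
           2*(u.z*v.x - u.x*v.z),
           2*(u.x*v.y - u.y*v.x)};
    Vec3 r;
    r.x = v.x + q.w*t.x + (u.y*t.z - u.z*t.y);
    r.y = v.y + q.w*t.y + (u.z*t.x - u.x*t.z);
    r.z = v.z + q.w*t.z + (u.x*t.y - u.y*t.x);
    return r;
}
```

For a camera's facing direction, rotating (0, 0, 1) by the camera's orientation quaternion gives the world-space forward vector; scale it by the move distance and add it to the camera's position.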
  7. Megaboz

    Calculating vector from quaternion

    Sorry, I meant a single-column matrix representing the vector, not a full 4x4 or 3x3 or whatever. I'm confused, though, about what you're saying - maybe if I gave more info about what I'm doing you could clear up the parts I'm misunderstanding. I'm representing my camera as a quaternion, which I implement as a class with 4 floats (w, x, y, z) and some functions to normalize, multiply, convert to an axis-angle, etc. I have the rotations working great: after multiplying the original quat by the temp quat, I take the axis-angle of the new quaternion and send that to a glRotate.

    So now let's say I wanted to move the camera forward 1 unit in the Z direction. I'm keeping track of world coordinates, so I don't just want to directly call a glTranslate - I want to know the translated coordinates. I know the relative vector of where I want to go (1 unit in the Z), I know my camera's current rotation via its quaternion, and I know the camera's current world coordinates. What series of calculations would I use to determine the camera's new coordinates? I'm assuming that once I know the direction vector I'm facing, I can simply multiply each component by the magnitude, so that part's easy; I'm just not sure about actually getting that vector from my camera's rotation quaternion.
  8. Megaboz

    Calculating vector from quaternion

    So would I first convert my quaternion into a rotation matrix, and then multiply my original vector by that rotation matrix, the resulting matrix containing my new vector? Or am I totally off? I'm still a little fuzzy with the math.
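The matrix route asked about above works too: convert the unit quaternion to a 3x3 rotation matrix, then multiply the vector by it. A minimal sketch, with hypothetical Quat/Vec3 types, the matrix stored row-major as a flat array of 9 floats, and the quaternion assumed normalized:

```cpp
#include <cmath>

struct Quat { float w, x, y, z; };  // assumed normalized
struct Vec3 { float x, y, z; };

// Build the 3x3 rotation matrix (row-major) for a unit quaternion.
void quatToMatrix(const Quat& q, float m[9]) {
    float xx = q.x*q.x, yy = q.y*q.y, zz = q.z*q.z;
    float xy = q.x*q.y, xz = q.x*q.z, yz = q.y*q.z;
    float wx = q.w*q.x, wy = q.w*q.y, wz = q.w*q.z;
    m[0] = 1 - 2*(yy + zz); m[1] = 2*(xy - wz);     m[2] = 2*(xz + wy);
    m[3] = 2*(xy + wz);     m[4] = 1 - 2*(xx + zz); m[5] = 2*(yz - wx);
    m[6] = 2*(xz - wy);     m[7] = 2*(yz + wx);     m[8] = 1 - 2*(xx + yy);
}

// Multiply the column vector v by m; the result is the rotated vector.
Vec3 transform(const float m[9], const Vec3& v) {
    return Vec3{ m[0]*v.x + m[1]*v.y + m[2]*v.z,
                 m[3]*v.x + m[4]*v.y + m[5]*v.z,
                 m[6]*v.x + m[7]*v.y + m[8]*v.z };
}
```

Both routes give the same answer; the direct quaternion transform just skips building the intermediate matrix.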
  9. Hello - I'm currently using a quaternion, converted to an axis-angle, for rotations. This is probably an easy question, but using that quaternion, how can I calculate a 3D vector of the direction it's pointing? I don't fully understand quaternions, but everything I've read says the vector component can't be treated as a regular 3D vector, so I'm assuming I can't just extract it directly; I assume I need to apply some trigonometric functions first. Thanks for any help you can give!
  10. Megaboz

    Questions and advice

    Thanks vgsmart, dh000g, I appreciate your feedback. VGSmart - you make some good points about the job ruining the whole experience for you, I know I was much more keen on network administration before I had to deal with the crap that goes along with it for a number of years. Something to think about. Thanks again guys.
  11. Hello - I've been dealing with these questions for a few years now, but I've never really gotten any insight from people in the industry, so I hope you guys and gals could help me out a bit; it would mean a lot to me.

    A bit about me, to let you know where I'm coming from: I'm a 25-year-old with a BS in Computer Science from the University of New Hampshire. My career so far has been a mix of network administration and programming - my title and duties are network administration, but I've also written a number of in-house and contracted client/server applications, databases, utilities, etc. I am a very competent software engineer and computer scientist, and I have done a large amount of work in the emulation of biological neural networks and other computational neuroengineering. I am starting to build up a portfolio of smaller 3D games and demos to demonstrate my abilities in those areas. Bottom line is, I love every aspect of games more than I can describe - not just playing them, but thinking about them, reading about them, studying the algorithms, the math behind them, everything - and I know I can do the work.

    My two main questions are: One, going into the gaming industry from a non-gaming programming/network administration managerial level with a BS in CS, can I reasonably expect (region dependent) to make at least 45-50k? Or is that a total pipe dream, and would coming from outside the industry require me to start much lower? I want to get in more than anything, but I also can't survive too large a pay cut. Two, are all the jobs pretty much on the West Coast, especially CA and WA? I'm an East Coast kid and hesitant to move. Is this something I'm going to have to do if I really want to do it? (I would rather live on the West Coast than Texas, but if New York were an option that would be great. I'm currently about an hour outside of Boston.) I really want to do this, but I have to weigh how much of a change to my life it would be.

    Thanks for your time and insight.
  12. Megaboz


    Quote:Original post by Promag Well again I suggest using multiple index arrays. For instance, a cube has 8 vertices and 6 normals: vertices = [v0x v0y v0z v1x v1y v1z v2x v2y v2z v3x v3y v3z v4x v4y v4z v5x v5y v5z v6x v6y v6z v7x v7y v7z] normals = [n0x n0y n0z n1x n1y n1z n2x n2y n2z n3x n3y n3z n4x n4y n4z n5x n5y n5z] Then we could specify two index arrays, one per attribute array (first triangle): vertex index = [0 1 2 ... ] normal index = [0 0 0 ... ] This way one could save memory on the GPU. I can see only one problem: fast memory access...

    But the glDrawElements function only takes 1 array of indices - how can I specify both a coordinate index array and a normals index array? I was under the impression that you just gave it a single index array and it retrieved corresponding values from every enabled array using those index values.
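That impression is correct: core OpenGL's glDrawElements uses a single index stream for every enabled array. The usual workaround is to "de-index" the separate position/normal index arrays offline, creating one combined vertex per unique (position index, normal index) pair. A minimal sketch in plain C++, with a hypothetical Mesh type and deindex function and no GL calls:

```cpp
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// glDrawElements uses ONE index per vertex, so each (position, normal)
// index pair that hasn't been seen yet becomes a new combined vertex.
struct Mesh {
    std::vector<float> positions;   // 3 floats per combined vertex
    std::vector<float> normals;     // 3 floats per combined vertex
    std::vector<unsigned> indices;  // one index per corner, for glDrawElements
};

Mesh deindex(const std::vector<float>& srcPos,    // xyz per source position
             const std::vector<float>& srcNrm,    // xyz per source normal
             const std::vector<unsigned>& posIdx,
             const std::vector<unsigned>& nrmIdx) {
    Mesh out;
    std::map<std::pair<unsigned, unsigned>, unsigned> seen;
    for (std::size_t i = 0; i < posIdx.size(); ++i) {
        auto key = std::make_pair(posIdx[i], nrmIdx[i]);
        auto it = seen.find(key);
        if (it == seen.end()) {     // new (position, normal) combination
            unsigned newIndex = (unsigned)(out.positions.size() / 3);
            for (int c = 0; c < 3; ++c) {
                out.positions.push_back(srcPos[3*posIdx[i] + c]);
                out.normals.push_back(srcNrm[3*nrmIdx[i] + c]);
            }
            it = seen.insert({key, newIndex}).first;
        }
        out.indices.push_back(it->second);
    }
    return out;
}
```

A position is only duplicated when it's shared by faces with different normals, so for mostly smooth-shaded meshes this is far cheaper than storing three copies of every vertex.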
  13. Megaboz


    Thanks for the responses, sounds like I'll just have to include coordinate information 3 times for each vertex, I suppose it's not that big of a deal.
  14. Hi - I've seen a few threads on this topic but I haven't found a solution to my problem; hopefully someone can shed some light. I'm using glDrawElements to render polygons to the screen. It works well if used just for vertex coordinates, but what if I want to include normal data, where each vertex could have 3 normals associated with it, or other arrays (color, texture mapping, etc.)? Someone previously suggested putting 3 copies of each vertex's coordinates into the array, but that defeats the purpose of using shared vertex data and glDrawElements - I could just use glDrawArrays. Is there a way to use glDrawElements to draw shared vertex information for coordinates but also specify 3 values (per face) for color, texture, and normal data, without making 2 more copies of the coordinate information and defeating the point of sharing? What is the standard way of rendering a high-poly-count model (e.g. one imported from a 3D package) to the screen? Thanks!
  15. Megaboz

    Simple lighting woes

    It's okay - I think the question I really have is just my general one about ambient/diffuse: if you set a light source to white (1,1,1) for both ambient and diffuse, and then set the material of an object to reflect ambient and diffuse equally (regardless of the color - we'll just say red, 1,0,0), assuming normals are calculated correctly, should the object show proper shading as it rotates, or will it look flat since it's reflecting the same amount of ambient as diffuse lighting?

    Also, I just found out why everything was very dim: I had glEnable'd GL_TEXTURE_2D, and objects with textures attached looked correct, but objects without textures were REALLY dim. If I make sure 2D texturing isn't applied to objects without textures, they look super bright again.
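On the ambient/diffuse question above: the object should still shade, because in the fixed-function lighting model only the diffuse term is scaled by surface orientation, while the ambient term is constant. A minimal sketch of the per-channel math, with hypothetical names, a directional light, and no specular or attenuation terms:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Per-channel vertex color in the spirit of fixed-function OpenGL lighting:
//   color = matAmbient * lightAmbient + matDiffuse * lightDiffuse * max(0, N.L)
// where N is the unit vertex normal and L the unit direction toward the light.
// Even with identical ambient/diffuse values, the max(0, N.L) factor varies
// across the surface, so a rotating object still shows shading; it just never
// goes darker than the ambient floor.
float shadeChannel(float matAmbient, float lightAmbient,
                   float matDiffuse, float lightDiffuse,
                   const Vec3& n, const Vec3& l) {
    float nDotL = std::max(0.0f, dot(n, l));
    return std::min(1.0f, matAmbient * lightAmbient
                        + matDiffuse * lightDiffuse * nDotL);
}
```

With matAmbient equal to matDiffuse, surfaces facing the light are still brighter than surfaces edge-on to it, so the shading reads; the ambient term just sets a brightness floor so nothing goes fully black.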