About SoD

  1. For simple scenes and collision detection it can work well, but for anything more complicated (think of the foliage of a tree) it is difficult to get the right depths. Maybe I also have to render an image of the background with Z-buffer values and use it to fill the depth buffer? Something like this
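The idea in this follow-up, comparing each rendered 3D fragment's depth against a stored per-pixel depth image of the background, can be sketched in software like this (the function and variable names are mine, and I assume the usual convention that smaller depth values are nearer, as with OpenGL's default GL_LESS test):

```python
def composite(bg_color, bg_depth, fragments):
    """Composite 3D fragments over a prerendered background.

    bg_color: 2D list of background pixels.
    bg_depth: 2D list of background depths (0.0 = near, 1.0 = far).
    fragments: list of (x, y, depth, color) produced by the 3D renderer.
    """
    color = [row[:] for row in bg_color]
    depth = [row[:] for row in bg_depth]
    for x, y, d, c in fragments:
        if d < depth[y][x]:      # standard depth test: nearer fragment wins
            depth[y][x] = d
            color[y][x] = c
    return color
```

In a real OpenGL renderer the same effect is obtained by writing the background's depth image into the hardware Z-buffer before drawing the 3D objects, so the ordinary depth test does the per-pixel sorting.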
  2. Hello, I'm looking for a way to create a (for now) simple adventure engine that combines prerendered backgrounds (2D images) with 3D objects (the characters, animations for interacting with collectible items, etc.). Commercial examples are games like Final Fantasy 7, Alone in the Dark, Grim Fandango and so on. I'd like to ask if anyone can explain in more detail how I can make 2D and 3D interact reliably.

     First there is depth sorting between objects rendered into the background (for example a table or a column) and the player moving on screen. For this I was thinking about per-pixel depth sorting: all the pixels of the table have a certain Z value; if the player is deeper than that Z, the table is drawn over him, otherwise the 3D object is drawn in front. Is it that simple? I mean, is it only a question of setting the right Z values?

     Another issue linked to this problem is the borders of the objects: if I show the table over the 3D object, the border around the table is (probably) a blend between the table and the ground, not between the table and the 3D object. In that case do I have to remove the table from the prerendered background and apply it as a sprite? Is that better?

     Last is collision detection with the objects of the background (the same table, or the walls): here too my idea is to prepare a background mask marking which pixels can be walked on and which cannot. When I move the player, I check for collision between a small circle centered at the player's feet and the blocked pixels of the collision mask.

     Not knowing the algorithms usually implemented in such games, I hope my ideas are not too stupid :P I'm also asking myself whether these kinds of techniques are hardware friendly, and whether they can also work well without 3D hardware acceleration. Thank you for any support
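The collision-mask idea from the last paragraph can be sketched as a minimal software check (all names and conventions here are assumptions for illustration: the mask is True where the floor is walkable, and pixels outside the mask count as blocked):

```python
import math

def can_stand(mask, cx, cy, radius):
    """Return True if a circle of the given radius centered at the
    player's feet (cx, cy) covers only walkable mask pixels."""
    h, w = len(mask), len(mask[0])
    r = int(math.ceil(radius))
    for y in range(int(cy) - r, int(cy) + r + 1):
        for x in range(int(cx) - r, int(cx) + r + 1):
            if (x - cx) ** 2 + (y - cy) ** 2 > radius * radius:
                continue  # pixel lies outside the foot circle
            if x < 0 or y < 0 or x >= w or y >= h or not mask[y][x]:
                return False  # off the mask, or on a blocked pixel
    return True
```

A movement step would then be attempted only if `can_stand` holds at the new position; this is entirely CPU-side work on a small neighbourhood of pixels, so it needs no 3D hardware at all.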
  3. Hello, I'm coding an export script from a 3D modeler (Blender) to my own engine written in OpenGL. Every object in the modeler has its own position, rotation and scale. I'm having a little trouble converting those values, because where OpenGL uses the Z axis for depth (more depth -> negative values, less depth -> positive values), Blender uses the Y axis instead (more depth -> POSITIVE values, less depth -> NEGATIVE values).

     I've converted the x, y, z positions by passing them as x, z, -y to my engine. Scale is passed as x, z, y.

     My trouble is converting the rotation values. In the engine I use quaternions to handle the calculations more easily; I use this math library. I'd like to pass the Euler angles of the rotation instead of the 4 components of the quaternion (which Blender can calculate itself), because people can better understand the rotation when reading the XML file "by eye". Given the 3 angles, Blender rotates the object first around its X axis, then Y, and finally Z. I've tried passing the angles in x, z, y order to my engine. The quaternion class can be set with yaw, pitch and roll angles, so first I multiplied my "zero" quaternion by a quaternion set with only the x (pitch) value. But when the object rotates one way in Blender, it rotates the other way in OpenGL, and if I add the other angles as well the result is unpredictable. I'd like to know whether I have to recalculate the angles in some way because the depth axis is reversed and different, or what. Thanks :)
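One way to look at the rotation problem above: the position mapping (x, y, z) -> (x, z, -y) is itself a proper rotation of the coordinate frame (its determinant is +1), so instead of reordering Euler angles you can build the quaternion in Blender's frame and then remap its vector part exactly like a position. A minimal sketch under that assumption (all helper names are mine, and Blender's default XYZ Euler order, X applied first, is assumed):

```python
import math

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    """Hamilton product a * b."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * q^-1."""
    w, x, y, z = q
    r = quat_mul(quat_mul(q, (0.0, v[0], v[1], v[2])), (w, -x, -y, -z))
    return (r[1], r[2], r[3])

def blender_euler_to_quat(rx, ry, rz):
    """Blender XYZ order: rotate about X first, then Y, then Z."""
    qx = quat_from_axis_angle((1, 0, 0), rx)
    qy = quat_from_axis_angle((0, 1, 0), ry)
    qz = quat_from_axis_angle((0, 0, 1), rz)
    return quat_mul(qz, quat_mul(qy, qx))

def blender_to_gl_pos(x, y, z):
    """Blender (Z up, +Y into the screen) -> OpenGL (Y up, -Z into the screen)."""
    return (x, z, -y)

def blender_to_gl_quat(q):
    """Because the basis change is a proper rotation, the quaternion's
    vector part transforms the same way as a position vector."""
    w, x, y, z = q
    return (w, x, z, -y)
```

With this, a point rotated in Blender and then remapped lands in the same place as the remapped point rotated by the remapped quaternion, which is exactly the consistency the export script needs; the XML can still store the original Blender Euler angles for readability, doing this conversion at load time.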