
# pilarsor

Member
1. ## the plane and axes.

Thanks for the illustrations. First, if you want to compute a rotation matrix that rotates points around the center of the profile inside that plane, I would suggest creating a quaternion from an axis-angle representation. The axis of rotation you already have: it is defined by your plane's normal vector. The angle you can compute like the following (using 3D coordinates):

    vec3 v0 = RotPlane - centerOfProfile;
    vec3 v1 = RotPlane' - centerOfProfile;
    v0.normalize();
    v1.normalize();
    float angle = acosf( dot( v0, v1 ) ); // dot( v0, v1 ) = v0.x*v1.x + v0.y*v1.y + v0.z*v1.z

Now generate your quaternion q (the axis needs to be normalized):

    q.x = axis.x * sinf( angle / 2 );
    q.y = axis.y * sinf( angle / 2 );
    q.z = axis.z * sinf( angle / 2 );
    q.w = cosf( angle / 2 );

If you want a matrix representation of q, just use a standard method for converting a normalized quaternion to a rotation matrix. Btw... if you just want the angle to apply in one of your predefined axis-aligned planes (xy, xz, or yz), leave out the whole quaternion step. There you can just generate a standard 2D rotation matrix using the sine and cosine of that angle.
2. ## the plane and axes.

Actually, you need to plug in position vectors for x, y, z in the same coordinate system your plane is defined in, not direction vectors as you did with (0,0,1) and (0,1,0). But I'm not sure what you're actually looking for... maybe you can clarify a bit more what information you actually have given, and what you need to calculate. Thanks.
3. ## 2D Coordinates from 3D View and Projection Matrices

Keep in mind that your matrix M transforms each point into homogeneous clip space (HCS), ranging over x = -w..w, y = -w..w, z = -w..w (with w carrying the depth, w = z), so you might want to additionally divide each projected coordinate by w to get your screen values into the range x = -1..1, y = -1..1, z = -1..1, the normalized device coordinates (NDC). So, with p a world-space point:

    c = M * p      ( c in HCS )
    s = c / c.w    ( divide by the homogeneous coordinate c.w )

To get back into world space, store your c.w per fragment/pixel/screen coordinate. Then for each screen coordinate:

    c = s * c.w
    p = M^-1 * c
4. ## the plane and axes.

You calculate a, b, c, d and substitute the values you want to check for x, y, z: the result of ax + by + cz + d is zero on the plane, and its sign tells you which side the point lies on.
5. ## Creating a hud/gui with GLSL

Dynamic text is something that you might want to prepare in your C++ code. Take a texture containing the letters of the alphabet, each letter equally spaced in width and height. Write a C++ module that takes a string as input and converts each character of the string to a (u, v) offset into the 2D alphabet texture. For rendering this text as textured quads, you can just use a standard vertex and fragment shader pair that mirrors the fixed-function pipeline. But sure, if you like, go ahead and do something fancy with that text, e.g. displacing the quad vertices, distorting the uv coords, or applying some function for color generation...
6. ## Opengl - linker errors

Seems that you are missing linking against glut32.lib.
7. ## Help: Navigation Meshes

When you want to do automatic navigation mesh generation from real-world meshes, you might want to have a look at mesh and polygon reduction methods that keep the main characteristics of the mesh. Here are some links I found:

http://graphics.stanford.edu/courses/cs468-10-fall/LectureSlides/08_Simplification.pdf
http://webdocs.cs.ualberta.ca/~anup/Courses/604_3DTV/Presentation_files/Polygon_Simplification/luebke01developers.pdf
8. ## Edge-Face Collision detection between sphere and box

Hmm... why do you make it so complicated? When you know you have a sphere and a box colliding with each other, why don't you use the implicit representation of the sphere, (x - x0)^2 + (y - y0)^2 + (z - z0)^2 = r^2, where (x0, y0, z0) is the center of the sphere? And use a transformed box, where the box is either an object-oriented bounding box, or just an axis-aligned bounding box (AABB): transform the sphere into your local AABB space, which might make the collision detection easier. Maybe this applies to your case; if not, I apologize... though finding the point(s) of intersection here is not so intuitive.
9. ## Creating a hud/gui with GLSL

Try the following steps:

1.) Switch to orthographic projection
2.) Prepare your hud geometry
3.) Apply a vertex and fragment shader to your hud geometry
4.) Draw the triangles describing your hud geometry

When you choose your vertex shader, think about how you would like to transform your vertices: e.g. as a plain linear transform, applying the mere modelviewproj transform, or as a non-linear transform, e.g. where you offset each vertex with a different vector (computed in the vertex shader). Prepare the fragment shader with per-vertex values to be interpolated for each fragment. For your fragment shader, think about how you would change the fragments of your rendered hud area on screen: e.g. use interpolated uv values across the triangles as input to a function for pixel color distortion (executing inside the fragment shader), or anything else that comes to mind...