Member Since 29 Apr 2002

#5224361 OpenGL View Matrix that rotates the camera

Posted by NumberXaero on 19 April 2015 - 03:01 PM

What part are you hung up on? What have you tried? There are a few ways to build a view matrix, depending on your setup.

If you treat the camera as a regular object like any other, the view matrix is simply the inverse of that object's world matrix. If you control it with pitch-yaw-roll (p-y-r), the first thing you'll want to do is track these values and wrap them at 360 degrees.

Once you have the p-y-r angles, make a rotation out of each (using quaternions or matrices) around its axis from the axis/angle: roll around Z, yaw around Y, etc. Concatenate them, world = roll * (yaw * pitch), invert, convert to a matrix if it isn't one already, and load it as the view matrix.

Other options: track the y-p-r values, use the angles to rotate the x-y-z axes, and use the resulting vectors to build the inputs to a lookat function, at = pos + zaxis, up = yaxis. You could also use the final matrix from the first option to rotate the x-y-z axes and build the lookat vectors from the results.
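To make the first option concrete, here's a minimal sketch assuming column-major matrices and a made-up Mat4 helper (not from any particular library). For a rigid transform the inverse is cheap: transpose the rotation part and rotate-negate the translation.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal column-major 4x4 for illustration; m[col*4 + row].
struct Mat4 { float m[16]; };

Mat4 identity() {
    Mat4 r = {};
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    return r;
}

Mat4 mul(const Mat4& a, const Mat4& b) { // a * b
    Mat4 r = {};
    for (int c = 0; c < 4; ++c)
        for (int rw = 0; rw < 4; ++rw)
            for (int k = 0; k < 4; ++k)
                r.m[c*4+rw] += a.m[k*4+rw] * b.m[c*4+k];
    return r;
}

Mat4 rotX(float a) { // pitch
    Mat4 r = identity();
    r.m[5] = cosf(a);  r.m[6]  = sinf(a);
    r.m[9] = -sinf(a); r.m[10] = cosf(a);
    return r;
}
Mat4 rotY(float a) { // yaw
    Mat4 r = identity();
    r.m[0] = cosf(a); r.m[2]  = -sinf(a);
    r.m[8] = sinf(a); r.m[10] = cosf(a);
    return r;
}
Mat4 rotZ(float a) { // roll
    Mat4 r = identity();
    r.m[0] = cosf(a);  r.m[1] = sinf(a);
    r.m[4] = -sinf(a); r.m[5] = cosf(a);
    return r;
}

// world = roll * (yaw * pitch); view = inverse(world).
// Rigid inverse: rotation part is R^T, translation is -R^T * t.
Mat4 viewFromCamera(float pitch, float yaw, float roll,
                    float tx, float ty, float tz) {
    Mat4 world = mul(rotZ(roll), mul(rotY(yaw), rotX(pitch)));
    Mat4 view = {};
    for (int c = 0; c < 3; ++c)
        for (int rw = 0; rw < 3; ++rw)
            view.m[c*4+rw] = world.m[rw*4+c]; // transpose rotation part
    view.m[12] = -(view.m[0]*tx + view.m[4]*ty + view.m[8]*tz);  // -R^T * t
    view.m[13] = -(view.m[1]*tx + view.m[5]*ty + view.m[9]*tz);
    view.m[14] = -(view.m[2]*tx + view.m[6]*ty + view.m[10]*tz);
    view.m[15] = 1.0f;
    return view;
}
```

With all angles zero, the view matrix is just the camera position negated, which is an easy way to sanity-check your conventions.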

#5223061 Skeletal animation with COLLADA

Posted by NumberXaero on 13 April 2015 - 07:35 PM

I'm going to refer to bones as joints, where a joint is a local xform (transform, xf) which, when combined with its parent joint's world xform, produces the joint's world xform. A transform is often a matrix, or the individual translation, scale, and rotation that make up the xform.

You'll have to sort out the matrix conventions based on what you do in your code and what you get from the file. I would get it working on the CPU first, then move the skinning to the GPU once it works.

joint_local_xf = given from file, an offset from the parent joints xform (no parent, local = world)
joint_world_xf = parent_joint_world_xf * joint_local_xf

So you have:

original_vertices = the model, a list of vertices (and indices)
bind_shape_matrix = from the creation package, set when the model was bound to the skeleton
inv_bind_pose_matrix[] = array of matrices, one per joint (identity if none is supplied)
weight_list[] = weights associating joints with vertices, how much each joint affects a vertex
animation_key_times[] = array of key times; when you land between two key times, you interpolate using some method
animation_key_data = per-joint lists describing what a joint's transform should be at a given time, sometimes stored as matrices, often easier to work with as separate translation, scale, and rotation

After the model is loaded, you have the original vertices.
The bind_shape_matrix is applied every frame and doesn't change, so you can apply it once at load, save the result, and use it for skinning.
    bind_shape_vertex[i] = bind_shape_matrix * original_vertex[i]
Each joint (bone) has an xform from the scene; this is probably the pose for frame zero (the first frame). You can think of these as the local xforms to use if the joint doesn't move during the animation.
Ex. An arm up in the air with only the wrist joint moving left to right: the elbow and shoulder would be up in the air at the start but wouldn't animate, only the wrist joint would change.

During the scene loop

1) Advance the animation time; clamp or wrap time to stop or repeat, scale it to speed up or slow down, etc.
2) Use the current animation time to interpolate between two animation keyframes' local transforms, producing a new local transform for a given joint.
This is where storing the data as vec3s plus quaternions for rotations might be easier to work with than matrices.
   interpolated_anim_data_xf = lerp(anim_data_xf[n], anim_data_xf[n+1])
3) Set this interpolated local xform from 2) as current for the joint (replacing the frame-0 pose from the scene, or leave the pose alone if the joint has no animation data).
   joint_local_xf = interpolated_anim_data_xf
4) Update the joint's world xform using its new interpolated local transform, root -> leaf.
   joint_world_xf = parent_joint_world_xf * (interpolated_)joint_local_xf
5) Create a skinning matrix for each joint from the new world matrix.
   joint_skin_matrix = joint_world_matrix * joint_inv_bind_pose_matrix
6) Do the skinning using bind_shape_vertex[] and each joint_skin_matrix.

ForEach bind_shape_vertex[] v
   skinned_vertex_pos = vec3(0.0, 0.0, 0.0)
   ForEach joint j affecting vertex v
      skinned_vertex_pos += (joint_skin_matrix[j] * bind_shape_vertex[v]) * joint_weight[j]   // do this for positions and normals; remember to renormalize normals if things look weird
   skinned_vertex[v] = skinned_vertex_pos

7) Draw the skinned_vertex[] list

That's the general idea of how the data is used.
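As a rough sketch of step 2, assuming translation-only keys (a real implementation would also slerp a quaternion for the rotation and lerp the scale); the Key struct and sampleAnimation name are made up for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical translation-only key; real COLLADA data would also carry
// rotation (slerped as a quaternion) and scale.
struct Key { float time; float x, y, z; };

// Step 2: find the two keys bracketing time t and lerp between them.
// Assumes keys are sorted by time and non-empty.
Key sampleAnimation(const std::vector<Key>& keys, float t) {
    if (t <= keys.front().time) return keys.front(); // clamp before start
    if (t >= keys.back().time)  return keys.back();  // clamp past end
    size_t n = 0;
    while (keys[n + 1].time < t) ++n;                // find the segment
    const Key& a = keys[n];
    const Key& b = keys[n + 1];
    float s = (t - a.time) / (b.time - a.time);      // 0..1 within the segment
    Key out;
    out.time = t;
    out.x = a.x + (b.x - a.x) * s;
    out.y = a.y + (b.y - a.y) * s;
    out.z = a.z + (b.z - a.z) * s;
    return out;
}
```

The result becomes the joint_local_xf for that joint this frame; steps 4-6 then proceed exactly as above.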

#5222669 How to create a circular orbit and an angry bird space like orbit ?

Posted by NumberXaero on 11 April 2015 - 05:36 PM

Well, if you've got it attracting to the planet, you just need to go around it now. The simplest way would be to apply a force that moves the object perpendicular to the vector from the object to the planet.

vec2 dirToPlanet = normalize(planetPos - objectPos);
vec2 side = vec2(-dirToPlanet.y, dirToPlanet.x);
object.AddForce(side * someForce);            // <-- recalc side and do this each frame while inside the attracting region
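Self-contained version of the same idea (Vec2 and orbitTangent are made-up names): rotating the direction 90 degrees guarantees the force is perpendicular to the pull toward the planet, which is what keeps the object circling instead of falling straight in.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

Vec2 normalize(Vec2 v) {
    float len = sqrtf(v.x * v.x + v.y * v.y);
    return { v.x / len, v.y / len };
}

// Tangent for a counter-clockwise orbit: rotate the direction toward the
// planet by 90 degrees, i.e. (x, y) -> (-y, x).
Vec2 orbitTangent(Vec2 objectPos, Vec2 planetPos) {
    Vec2 dir = normalize({ planetPos.x - objectPos.x, planetPos.y - objectPos.y });
    return { -dir.y, dir.x };
}
```

Flip the sign of both components for a clockwise orbit.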

#5221530 Issues with IDE Linker? Why is this happening, and how do i fix it?

Posted by NumberXaero on 05 April 2015 - 04:56 PM

Maybe the include guards on lines 21, 22, 46, and 47 and the #endif at the end of the file are messing things up; maybe they shouldn't be there at all after you moved things around?

Also where is this Object class?

#5221378 Resizing high resolution textures without software?

Posted by NumberXaero on 04 April 2015 - 01:19 PM

The ImageMagick command-line tool?

#5217533 Mantle programming guide and white paper released

Posted by NumberXaero on 18 March 2015 - 08:48 PM

From what's being said about the new APIs coming, this should give an early look at what's to come.

#5214147 Vulkan is Next-Gen OpenGL

Posted by NumberXaero on 03 March 2015 - 03:20 AM

#5205869 Quick texture array question

Posted by NumberXaero on 21 January 2015 - 04:19 PM

Are you setting the texture parameters GL_TEXTURE_COMPARE_MODE and GL_TEXTURE_COMPARE_FUNC on the shadow map texture when using it with a sampler2DShadow?

Which GL context and GLSL version are you using?

#5205741 Quick texture array question

Posted by NumberXaero on 21 January 2015 - 04:01 AM

If this is what you're doing, as jmakitalo posted:


As NumberXaero suggested, you bind the array to one texture unit, say 1:

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D_ARRAY, texturearray);
glUniform1i(location, 1);

In shaders you use

uniform sampler2DArray arrayTex;

/* ... */

void main()
{
    float index = 0.0;
    vec4 color = texture(arrayTex, vec3(texCoord.xy, index));
    /* ... */
}

The third component of the second argument to texture() is the array layer index. Arrays are great for overcoming texture image unit limitations, although the layers all need to have the same properties.



then that's correct. You generate an id and pass it to bind; you set an active texture unit before the bind, and pass that unit index to the sampler location. Assuming "texturearray" isn't a variable being changed between the bind calls, and assuming you are using three arrays containing the textures grouped by use:

//shadow map
glActiveTexture(GL_TEXTURE0);                  // unit 0
glBindTexture(GL_TEXTURE_2D, depthTexture);    // GenTexture id
glUniform1i(shadowMapSamplerUniformLocation, 0);     // sampler (index)

//diffuse image
glActiveTexture(GL_TEXTURE1);                               // unit 1
glBindTexture(GL_TEXTURE_2D_ARRAY, texturearrayDiffuse);    // GenTexture id, bound to 1
glUniform1i(diffuseArraySamplerUniformLocation, 1);        // sampler (index)
glUniform1i(LayerNumID1, imageIndex.x);       // the diffuse you want to access I take it?

//normal image
glActiveTexture(GL_TEXTURE2);                               // unit 2
glBindTexture(GL_TEXTURE_2D_ARRAY, texturearrayNormal);    // GenTexture id, bound to 2
glUniform1i(normalArraySamplerUniformLocation, 2);        // sampler (index)
glUniform1i(LayerNumID2, imageIndex.y);       // the normal you want to access I take it?

// specular: same pattern on unit 3

If "texturearray" is one big list of diffuse/normal/spec textures all together, then the shader would have a single uniform sampler2DArray rather than three, and the setup would be:

glActiveTexture(GL_TEXTURE1); // unit 1
glBindTexture(GL_TEXTURE_2D_ARRAY, texturearray); // GenTexture id, bound to 1, the only array
glUniform1i(arraySamplerUniformLocation, 1); // sampler (index)
glUniform1i(LayerNumID1, imageIndex.x); // the diffuse you want to access?
glUniform1i(LayerNumID2, imageIndex.y); // the normal you want to access?
glUniform1i(LayerNumID3, imageIndex.z); // the spec you want to access?

Short version: the sampler uniform needs the texture unit index.

#5205710 Quick texture array question

Posted by NumberXaero on 20 January 2015 - 11:58 PM

The general idea is certainly possible, but it's hard to tell from pseudo-code. You're binding "texturearray" to three different texture units and using three different samplers in the shader? Plus passing a 0 to the shader for the shadow uniform; not sure if you are actually doing that or if it's just a mistake in the example code.

But in general you should be able to set the active texture, bind, set active texture + 1, bind, etc.

#5205673 opengl shadow mapping

Posted by NumberXaero on 20 January 2015 - 07:49 PM

During the second pass, depths are generated by projecting the vertices into the light's POV, and then the comparison is made.

So say there's some point A, and the light can see it (so its depth lands in the shadow map). Behind it there's another point B that the camera can see but the light can't (say the triangle A belongs to is blocking B from the light's POV).

When B is projected into the light's POV in the second pass, it generates a second depth, larger (further away) than the one stored for A. B must be in shadow, because it's behind A; B's depth is larger from the light's POV (say the triangle A belongs to casts a shadow on B).

Without getting too complicated and drawing pictures, that's the best I can do.
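The comparison itself boils down to something like this (a sketch, with everything assumed to already be in the light's clip space; the bias parameter is an extra I'm adding to fend off self-shadowing):

```cpp
#include <cassert>

// A fragment is lit only if its depth as seen from the light is not greater
// than the depth the light recorded in the shadow map for that direction.
// The bias guards against "shadow acne" from limited depth precision.
bool inShadow(float fragmentDepthFromLight, float storedShadowMapDepth,
              float bias = 0.0f) {
    return fragmentDepthFromLight - bias > storedShadowMapDepth;
}
```

In the A/B example above, B's depth from the light is larger than A's stored depth, so inShadow returns true for B.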

#5203844 I need help fixing basic FreeType2 Errors

Posted by NumberXaero on 13 January 2015 - 12:24 AM



#include FT_FREETYPE_H


FT_FREETYPE_H is already defined in ftheader.h as #define FT_FREETYPE_H <freetype.h>, with the comment:


   * @macro:
   * @description:
   *   A macro used in #include statements to name the file containing the
   *   base FreeType~2 API.


Others would be:


#include FT_GLYPH_H
#include FT_CFF_DRIVER_H
#include FT_MODULE_H
#include FT_STROKER_H

#5200027 Logic question

Posted by NumberXaero on 25 December 2014 - 10:34 PM

You could maybe do something with bits. I'm not sure what it is you're doing, or how the data is being fetched; the resulting values would vary a bit, but they would be unique.


unsigned int but1 = ..., but2= ..., but3= ...;


// shift button into place and OR them together

unsigned int key = (but1 << 16) | (but2 << 8) | but3;        // byte layout: 00000000 but1 but2 but3


A B C = 66051 = 00000000 00000001 00000010 00000011

B B B = 131586 = 00000000 00000010 00000010 00000010

B B A = 131585 = 00000000 00000010 00000010 00000001

A A C = 65795 = 00000000 00000001 00000001 00000011
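Complete sketch of the packing, plus the reverse (shift back and mask off a byte), assuming each button code fits in 8 bits; the function names are just for illustration:

```cpp
#include <cassert>

// Pack three 8-bit button codes into one unique key, one byte each.
unsigned int packButtons(unsigned int but1, unsigned int but2, unsigned int but3) {
    return (but1 << 16) | (but2 << 8) | but3;
}

// Recover the individual buttons by shifting back and masking a byte.
unsigned int button1(unsigned int key) { return (key >> 16) & 0xFF; }
unsigned int button2(unsigned int key) { return (key >> 8) & 0xFF; }
unsigned int button3(unsigned int key) { return key & 0xFF; }
```

With A=1, B=2, C=3 this reproduces the values above: A B C packs to 66051, B B B to 131586, and so on.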

#5199378 Blending two textures on a model results in black surface

Posted by NumberXaero on 21 December 2014 - 02:40 AM

My guess would be, seeing as this is a book sample and it's the teapot model, that there is no second texture coordinate set for the model, and the author is simply using texture coordinate set 1 (Txr1) as texture coordinate set 2 (Out.Txr1 = Txr1; Out.Txr2 = Txr1;) for sampling the second texture.

Many models only have one texture coordinate set; any more than that means they were probably created for a specific purpose, a light map applied over a base texture for example, where the light map would need different texture coordinates than those used for the base texture.

It's not uncommon to use one texture coordinate set to sample many different textures: diffuse, normal, specular, emissive, etc.



With

float4 ps_main(VS_OUTPUT vo) : COLOR0
{
    return float4(vo.Txr2.x, vo.Txr2.y, 0.0, 0.0);
}

you will probably always get black, because the teapot model file probably doesn't have a second texture coordinate set, whereas


float4 ps_main(VS_OUTPUT vo) : COLOR0
{
    return float4(vo.Txr1.x, vo.Txr1.y, 0.0, 0.0);
}

will probably give you black, red, green, and yellow, because the loaded teapot model has a single valid texture coordinate set.

#5187875 [PhysX] Allow players to slide down slopes / Controller contact callbacks

Posted by NumberXaero on 18 October 2014 - 01:48 PM

The character controller description has a reportCallback member that can be set to an implementation of PxUserControllerHitReport. PxUserControllerHitReport has virtual functions


virtual void onShapeHit(const PxControllerShapeHit& hit);        
virtual void onControllerHit(const PxControllersHit& hit);
virtual void onObstacleHit(const PxControllerObstacleHit& hit);


The PxController*Hit objects all derive from PxControllerHit, which has members worldPos, worldNormal, dir, length, etc. Using these members, a few dot products, and some vector math, you should be able to slide the character downward according to your own gravity.


You didn't mention the PhysX version; this is assuming some version >= 3.0 or so.
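The vector math I mean is roughly this (a sketch, not PhysX API; the hit's worldNormal would be the surfaceNormal here): project out the component of gravity pointing into the surface, and what's left is the slide direction along the slope.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Slide direction: remove the component of gravity that pushes into the
// surface, leaving only the part tangent to the slope. On flat ground
// this is the zero vector (no slide); on a slope it points downhill.
Vec3 slideAlongSurface(Vec3 gravityDir, Vec3 surfaceNormal) {
    float d = dot(gravityDir, surfaceNormal);
    return { gravityDir.x - surfaceNormal.x * d,
             gravityDir.y - surfaceNormal.y * d,
             gravityDir.z - surfaceNormal.z * d };
}
```

You'd scale the result by your gravity strength and feed it into the controller's move each frame while the slope is steeper than your walkable limit.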