About jenny_wui

  1. It seems to me this is version-related. OpenGL 4.5 apparently allows vertex-shader outputs to reach the fragment shader even when a geometry shader is linked that does not forward them, whereas OpenGL 4.3 reports the linking error I mentioned above. Anyway, I'd like to hear from others whether I'm right about this. Thank you!
  2. I've encountered an unusual problem. The geometry shader is not set to NULL. On one platform, running the program produces the following linking error:
     Linking program
     The fragment shader uses varying Position, but previous shader does not write to it.
     The fragment shader uses varying Normal, but previous shader does not write to it.
     On another platform it works fine, without any error. I'm very confused.
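     In case it helps anyone hitting the same linking error: when a geometry shader is linked, it must itself declare and write every varying the fragment shader reads; vertex-shader outputs are not forwarded automatically. Below is a minimal pass-through sketch, not the original code — the names Position/Normal are taken from the error message, and it assumes the vertex shader's outputs are renamed to vPosition/vNormal so they don't collide with the geometry shader's outputs:

```glsl
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in vec3 vPosition[];   // vertex-shader outputs, received as arrays
in vec3 vNormal[];

out vec3 Position;     // the names the fragment shader expects
out vec3 Normal;

void main()
{
    // Re-emit each vertex, forwarding the varyings unchanged.
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        Position    = vPosition[i];
        Normal      = vNormal[i];
        EmitVertex();
    }
    EndPrimitive();
}
```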
  3. I don't think I explained myself properly. I'm using a geometry shader and it is linked into the program, but can data be sent from the vertex shader to the fragment shader, bypassing the geometry shader? Is that possible in any way?
  4. I've run into a problem while using a geometry shader. Normally, output from the vertex shader is passed through the geometry shader (if one is used) to the fragment shader. But is it possible for the vertex shader's output to be sent directly to the fragment shader, bypassing a geometry shader that is linked into the program? Any clarification will be highly appreciated. Thanks in advance!
  5. Thanks for the suggestion, and sorry for my poor knowledge. I'd like to write down another observation. As I said, I have an OpenCV matrix that I want to convert to an OpenGL matrix. Since OpenCV is row-major and OpenGL expects column-major storage, I stored it in column-major order. Next, OpenCV's y and z axes need to be reversed too. To get a properly oriented matrix in OpenGL, I multiplied the matrix by the following diagonal matrix:
     | 1  0  0  0 |
     | 0 -1  0  0 |
     | 0  0 -1  0 |
     | 0  0  0  1 |
     I get a matrix with the proper orientation (I assume), but still with the y- and z-axis vectors reversed. I know that if I just flip the sign of the y- and z-axis vectors it gives the desired result, but my question is: what is the standard procedure for converting an OpenCV matrix to an OpenGL matrix? If anyone has done similar work, please let me know.
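     For what it's worth, the two steps described above can be kept cleanly separate: one sign-flip of the y and z rows (the convention change), and one memory-layout transpose (row-major to column-major). This is only a sketch with plain arrays standing in for cv::Mat and the OpenGL float[16]; the axis convention (OpenCV camera: y down, z forward; OpenGL: y up, z backward) is my assumption about the setup:

```cpp
#include <cassert>

// Flip the y and z rows of a 4x4 row-major modelview matrix,
// i.e. M_gl = F * M_cv with F = diag(1, -1, -1, 1).
void flipYZ(double M[4][4]) {
    for (int c = 0; c < 4; ++c) {
        M[1][c] = -M[1][c];
        M[2][c] = -M[2][c];
    }
}

// Store a row-major 4x4 into the column-major float[16] layout OpenGL
// expects (e.g. for glLoadMatrixf). This changes only the memory layout;
// it is not a mathematical transpose of the transform.
void toColumnMajor(const double M[4][4], float out[16]) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            out[c * 4 + r] = (float)M[r][c];
}
```

     Doing the sign flip on the row-major matrix first, and the layout change last, avoids the "proper orientation but reversed axis vectors" halfway state described above.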
  6. Thank you. I am trying to rotate an object by multiplying with a rotation matrix, but I can't get it right. That's why I was asking whether getting the transformation from basis vectors makes any difference. I have an OpenCV rotation matrix that I need to use in my OpenGL program. I swapped rows for columns to get a rotation matrix for OpenGL, but now I find the rotation around the x-axis is inverted. How can I correct this? I know it can be fixed by manipulating the basis vectors, but I need to know where I went wrong. How do I get the correct rotation matrix from OpenCV to OpenGL? I posted this topic before without getting any clarification; please shed some light if you have an answer.
  7. Hello, I would like to know whether it is possible to obtain the transformation matrix from given x-, y- and z-axis basis vectors. Thanks
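     For reference, yes: if x, y and z are the basis vectors of the rotated frame expressed in world coordinates, placing them as the columns of a matrix gives the transform from that frame into world space. A sketch in OpenGL's column-major float[16] layout (the function name and the extra origin/translation column are mine):

```cpp
#include <cassert>
#include <cstring>

// Build a column-major 4x4 (OpenGL layout) whose first three columns are
// the basis vectors of the local frame and whose fourth column is the
// frame's origin; it maps local coordinates to world coordinates.
void basisToMatrix(const float x[3], const float y[3], const float z[3],
                   const float origin[3], float m[16]) {
    std::memset(m, 0, 16 * sizeof(float));
    for (int i = 0; i < 3; ++i) {
        m[0 + i]  = x[i];       // column 0: x axis
        m[4 + i]  = y[i];       // column 1: y axis
        m[8 + i]  = z[i];       // column 2: z axis
        m[12 + i] = origin[i];  // column 3: translation
    }
    m[15] = 1.0f;
}
```

     The inverse mapping (world into the local frame) is the transpose of the rotation part, provided the basis vectors are orthonormal.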
  8. Thanks for the reply. I understand you are suggesting decomposing the matrix, negating the corresponding rotation angle, and rebuilding the matrix. Is there any way to get the same result by multiplying by a reflection matrix? Thanks again in advance.
  9. Suppose I have a given rotation matrix. I need to invert the rotation around the z-axis and the rotation around the y-axis, while keeping the rotation around the x-axis intact (without any change). I am a bit confused about what matrix I should multiply the given rotation matrix by to get the needed transformation. Could anyone clarify? Thanks in advance.
  10. Some question about transformation

    Thank you very much for the reply, Crossbones. I tried changing the eye position calculated from the rotation vector, but I don't understand why it is not working. I would like to use OpenGL. Could you provide any related examples? Also, could you elaborate on how to change the gluLookAt call? Thanks again.
  11. Some question about transformation

    Thank you very much for the reply. Could you be a bit clearer, if possible with an example? Do I really need to switch to glOrtho to accomplish this? I would like to use a perspective projection; I think my view volume would be rotated in the latter case. I saw Nate Robins' tutorial but am confused about how to make it work.
  12. Some question about transformation

    Hi JohnnyCode, I have tried only changing the eye position, calculated from the new look-at vector, in the gluLookAt call, but the plane does not look square as it should; it still looks trapezoidal, like Figure (b). That's why I'm wondering whether I need to change other parameters in gluLookAt. If the plane is rotated parallel to the xz plane, it should still look like the picture in Figure (a). Do you think changing only the eye position will work, or do I also need to change the up vector? Please provide some suggestions. Thanks in advance.
  13. Some question about transformation

    Thank you very much for the reply. I think I calculated the new eye point in my posting as you suggested. But how do I find the new up vector? Please give me some suggestions. Thanks
  14. Hello, I think my previous postings were a bit confusing, so I want to clarify here. I have a plane, and the attached figure shows two positions of it: (a) no rotation and (b) rotated, where the rotation axis is defined by ax + by + cz. Now the plane will rotate, but it should always appear unrotated, as shown in figure (a). I think I need to adjust my glutReshape function, which looks as follows:
     /*******************************/
     void glutReshape(int width, int height)
     {
         width  = glutGet(GLUT_WINDOW_WIDTH);
         height = glutGet(GLUT_WINDOW_HEIGHT);
         if (width && height)
         {
             glViewport(0, 0, width, height);
             nearDist = 150.0 / tan((kFovY / 2.0) * kPI / 180.0);
             farDist  = nearDist + 100.0;
             aspect   = (double) width / height;
             glMatrixMode(GL_PROJECTION);
             glLoadIdentity();
             gluPerspective(kFovY, aspect, 0.1, farDist + 20);
             glMatrixMode(GL_MODELVIEW);
             glLoadIdentity();
             gluLookAt(0.0, 0.0, (nearDist + 5.0),  // camera/eye
                       0.0, 0.0, 0.0,               // center
                       0.0, 1.0, 0.0);              // up vector
         }
     }
     /*******************************/
     But I fail to manipulate it correctly. When the rotation axis moves from (0, 0, 1) to (a, b, c), the eye/camera should change from (0, 0, nearDist + 5) to ((nearDist + 5)·a, (nearDist + 5)·b, (nearDist + 5)·c). Is that right? But what about the up vector? How does it change under this rotation, and how can I calculate the new up vector? What other parameters in my glutReshape function need to change? Please give me some suggestions so that I can fix this problem. Thanks in advance.
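     One consistent way to get both vectors (a sketch only, not tested against the original program): find the rotation that carries the old view axis (0, 0, 1) onto the new unit axis (a, b, c), and apply that same rotation to the default up vector (0, 1, 0) as well. Applying one rotation to both vectors keeps them orthogonal, which gluLookAt needs. The helper names below are mine, and Rodrigues' formula is assumed as the rotation method:

```cpp
#include <cassert>
#include <cmath>

// Rodrigues' formula: rotate v about unit axis k by angle t (radians).
void rotateVec(const double v[3], const double k[3], double t, double out[3]) {
    double c = std::cos(t), s = std::sin(t);
    double kv = k[0]*v[0] + k[1]*v[1] + k[2]*v[2];   // k . v
    double cr[3] = { k[1]*v[2] - k[2]*v[1],          // k x v
                     k[2]*v[0] - k[0]*v[2],
                     k[0]*v[1] - k[1]*v[0] };
    for (int i = 0; i < 3; ++i)
        out[i] = v[i]*c + cr[i]*s + k[i]*kv*(1.0 - c);
}

// Given the new unit view axis n = (a, b, c), rotate the default eye
// direction (0,0,1) and up vector (0,1,0) by the rotation taking z to n.
void newEyeAndUp(const double n[3], double dist, double eye[3], double up[3]) {
    const double z[3] = {0, 0, 1}, y[3] = {0, 1, 0};
    double axis[3] = { z[1]*n[2] - z[2]*n[1],        // rotation axis: z x n
                       z[2]*n[0] - z[0]*n[2],
                       z[0]*n[1] - z[1]*n[0] };
    double len = std::sqrt(axis[0]*axis[0] + axis[1]*axis[1] + axis[2]*axis[2]);
    if (len < 1e-9) {                                // n is (anti)parallel to z
        eye[0] = 0; eye[1] = 0; eye[2] = dist;
        up[0]  = 0; up[1]  = 1; up[2]  = 0;
        return;
    }
    for (int i = 0; i < 3; ++i) axis[i] /= len;
    double ang = std::acos(n[2]);                    // angle between z and n
    rotateVec(z, axis, ang, eye);                    // eye direction = n
    rotateVec(y, axis, ang, up);                     // rotated up vector
    for (int i = 0; i < 3; ++i) eye[i] *= dist;      // dist = nearDist + 5
}
```

     This reproduces the eye = (nearDist + 5)·(a, b, c) guess from the post and additionally yields the matching up vector; the projection parameters in glutReshape should not need to change.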
  15. Hello, this is a continuation of my previously posted topic on transformation. I have attached two pictures of rotations about the X, Y and Z axes; the Z rotation is shown as the projection onto the XY plane for two different Z rotations, illustrating how the apparent length of the axis varies with the rotation. My question: from this information, is it possible to recover the rotation of the Z axis? Since the rotation is in the XY plane, the lengths of the X and Y axes do not vary, but the length of the Z axis does. Thanks in advance.
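     If I understand the setup, the projected length alone recovers only the magnitude of the tilt, not its direction: for a unit z-axis rotated into r = (rx, ry, rz), the length of its projection onto the XY plane is sqrt(rx² + ry²) = |sin θ|, where θ is the angle between r and the world z-axis. With the z component available as well, the sign ambiguity within [0, π] goes away. A sketch (function name is mine; it assumes the axis had unit length before projection):

```cpp
#include <cassert>
#include <cmath>

// Recover the tilt angle of the rotated z-axis from the length of its
// projection onto the XY plane (= |sin theta|) and its z component
// (= cos theta). Returns theta in radians, in [0, pi].
double tiltFromProjection(double projLen, double zComponent) {
    return std::atan2(projLen, zComponent);
}
```

     Note this gives only the tilt away from z; the direction of the tilt within the XY plane would come from atan2(ry, rx), which the projected length by itself does not encode.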