Joey P

  1. Yes, this is much preferred. Do you know of any tutorials with source code for doing this?
  2. I can't understand matrices

    The matrices convert points in world coordinates to points in screen coordinates. The projection matrix takes vertices of 3D geometry and projects them onto your 2D screen: every frame, each vertex is multiplied by the projection matrix to determine its 2D coordinates on screen. Before that, however, the model-view matrix converts vertices from object coordinates into coordinates relative to your view. So if you are rendering a cube, for example, and you define the center of the cube to be the point (0,0,0) in object coordinates, with one corner at (1,1,-1), and the cube is positioned 12 units in front of your camera and 5 units up, the model-view matrix determines the coordinate of each vertex relative to your camera position, i.e. what the vertex would be if your camera were at (0,0,0). You can work with the model matrix and view matrix separately, which I prefer, but some versions of OpenGL (like OpenGL ES 1.0) don't allow this. In the end you'll pass only the combined MVP matrix to your shader. The gluPerspective and glFrustum functions build the projection matrix for you; use gluPerspective if you want to specify a field of view directly. The "planes" you're talking about are the near and far clipping planes; they just determine how close and how far you can see objects. So if you specify a far clipping plane of 100, you won't be able to see any objects more than 100 units from the camera (ever wonder why old games use fog?). The farther out you set the far clipping plane, the farther away you'll be able to render objects, but if you set it too far out, depth precision suffers and OpenGL will have a harder time determining which objects should be in front of each other.
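The pipeline described above can be sketched in plain Java without any OpenGL at all. This is an illustrative toy, not code from the thread: the matrix values, the class name `MvpSketch`, and the cube vertex are all made up, and the projection values mimic a symmetric `glFrustum(-1, 1, -1, 1, 1, 100)` call. Matrices are column-major, as in OpenGL convention.

```java
public class MvpSketch {
    // Multiply two column-major 4x4 matrices: out = a * b.
    static float[] mul(float[] a, float[] b) {
        float[] out = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++) {
                float sum = 0;
                for (int k = 0; k < 4; k++)
                    sum += a[k * 4 + row] * b[col * 4 + k];
                out[col * 4 + row] = sum;
            }
        return out;
    }

    // Multiply a 4-component vector by a column-major 4x4 matrix.
    static float[] transform(float[] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++)
            out[row] = m[row] * v[0] + m[4 + row] * v[1]
                     + m[8 + row] * v[2] + m[12 + row] * v[3];
        return out;
    }

    public static void main(String[] args) {
        // Model-view: the cube's center sits 12 units in front of the
        // camera (-Z in eye space) and 5 units up, i.e. a translation
        // by (0, 5, -12).
        float[] modelView = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            0, 5, -12, 1
        };
        // Symmetric perspective projection, near = 1, far = 100.
        float n = 1, f = 100;
        float[] proj = {
            n, 0, 0, 0,
            0, n, 0, 0,
            0, 0, -(f + n) / (f - n), -1,
            0, 0, -2 * f * n / (f - n), 0
        };
        float[] mvp = mul(proj, modelView);

        // Corner of the cube at (1, 1, -1) in object coordinates.
        float[] clip = transform(mvp, new float[] {1, 1, -1, 1});
        // Perspective divide gives normalized device coordinates,
        // which the viewport then maps to 2D screen pixels.
        float ndcX = clip[0] / clip[3];
        float ndcY = clip[1] / clip[3];
        System.out.println("ndc = (" + ndcX + ", " + ndcY + ")");
    }
}
```

A vertex more than 100 units out would land outside the -1..1 NDC depth range and be clipped, which is exactly the far-plane behavior described above.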
  3. I've been looking around lately for an OpenGL ES 2.0 engine. All I need it to do is render static meshes in open space, load .obj files, and support texturing plus directional and point lighting, nothing fancy. I came across one called Rajawali; it seems to do everything I need, and it's open source so I can integrate it into my project and change what I need to. Just wondering if anyone here has used it in a serious application? If so, would you recommend it? The only thing I'm worried about is that it seems to load the .obj files from disk at runtime, which can be a bit slow, instead of providing a tool to convert them to vertex arrays or VBOs before compiling...
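The pre-conversion step wished for above could be a small build-time tool along these lines. This is a rough sketch, not Rajawali code: the class name `ObjToArray` is made up, it assumes the .obj is already triangulated, and it ignores normals, texture coordinates, and quad faces, which a real converter would have to handle.

```java
import java.util.ArrayList;
import java.util.List;

public class ObjToArray {
    // Parse only "v" (vertex position) and "f" (triangle face) records
    // and emit a flat float array ready to upload as a VBO.
    public static float[] convert(List<String> objLines) {
        List<float[]> positions = new ArrayList<>();
        List<Float> out = new ArrayList<>();
        for (String line : objLines) {
            String[] t = line.trim().split("\\s+");
            if (t[0].equals("v")) {
                positions.add(new float[] {
                    Float.parseFloat(t[1]),
                    Float.parseFloat(t[2]),
                    Float.parseFloat(t[3]) });
            } else if (t[0].equals("f")) {
                // .obj indices are 1-based; "1/2/3"-style references keep
                // the position index in the first slot.
                for (int i = 1; i <= 3; i++) {
                    int idx = Integer.parseInt(t[i].split("/")[0]) - 1;
                    for (float c : positions.get(idx)) out.add(c);
                }
            }
        }
        float[] array = new float[out.size()];
        for (int i = 0; i < array.length; i++) array[i] = out.get(i);
        return array;
    }
}
```

The resulting array could be written to a binary asset at build time and memory-mapped at startup, skipping the text parsing entirely on device.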
  4. Thanks, that helped, although it's still doing some slightly goofy things... I'll just have to start stepping through code
  5. Yes, I was going to be getting to that eventually. This is still just a test app, for learning. Second:

     [quote]
     v_Color[0] = a_Color[0] * diffuse;
     v_Color[1] = a_Color[1] * diffuse;
     v_Color[2] = a_Color[2] * diffuse;
     v_Color[3] = a_Color[3];

     v_Color = a_Color * diffuse; // write this instead!
     [/quote]

     But I don't want the alpha channel attenuated; if I did it that way I'd be able to see through the meshes. Third:

     [quote]
     If you are passing in the sun vector, is it changing? Try: vec3 nSunVector = normalize(vec3(1,1,1));
     [/quote]

     The sun vector changes very slowly over time, but at the moment I'm just passing in constants for testing, so it isn't that... My problem is that I'm lighting the surfaces before the rotations and transforms are applied. I know what I'm doing wrong, I just can't figure out how to do it right...
  6. So I have some meshes that do full rotations every, say, 10 seconds by rotating the model matrix an amount based on the modulus of the system clock... beginner stuff. Great, except I have directional light coming from the sun. So I pass sunVector into my shader, along with the vertex position and normal vector, and calculate the diffuse. Then the vertex gets MVP-matrixified after the color was calculated. So it looks like the directional sunlight is rotating along with the mesh - the same side is always lit. Vertex shader:

     [source lang="java"]
     private final String vertexShaderCode =
           "uniform mat4 u_MVPMatrix;                                          \n"
         + "uniform mat4 u_MVMatrix;                                           \n"
         + "uniform vec3 u_SunVector;                                          \n"
         + "attribute vec4 a_Position;                                         \n"
         + "attribute vec4 a_Color;                                            \n"
         + "attribute vec3 a_Normal;                                           \n"
         + "varying vec4 v_Color;                                              \n"
         + "void main()                                                        \n"
         + "{                                                                  \n"
         + "    vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0)); \n"
         // Normalize sun vector
         + "    vec3 nSunVector = normalize(u_SunVector);                      \n"
         // Lambert factor
         + "    float diffuse = max(dot(a_Normal, nSunVector), 0.0);           \n"
         + "    v_Color[0] = a_Color[0] * diffuse;                             \n"
         + "    v_Color[1] = a_Color[1] * diffuse;                             \n"
         + "    v_Color[2] = a_Color[2] * diffuse;                             \n"
         + "    v_Color[3] = a_Color[3];                                       \n"
         + "    gl_Position = u_MVPMatrix * a_Position;                        \n"
         + "}                                                                  \n";
     [/source]

     I must be missing a matrix somewhere?
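The symptom above - the same side always lit - comes from dotting the object-space `a_Normal` against the sun vector: the normal never sees the model rotation. In the shader the fix is to use the transformed normal (e.g. `dot(modelViewNormal, nSunVector)`, assuming the sun vector is supplied in the matching space). The effect can be sketched outside GLSL in plain Java; this toy (the class name `NormalLighting` and all values are made up for illustration) uses a Z-axis rotation standing in for the model matrix:

```java
public class NormalLighting {
    // Rotate (x, y, z) about the Z axis by `angle` radians.
    static float[] rotateZ(float[] n, double angle) {
        float c = (float) Math.cos(angle), s = (float) Math.sin(angle);
        return new float[] { c * n[0] - s * n[1], s * n[0] + c * n[1], n[2] };
    }

    // Lambert diffuse: clamped dot of the normal with the normalized sun.
    static float diffuse(float[] normal, float[] sun) {
        float len = (float) Math.sqrt(sun[0]*sun[0] + sun[1]*sun[1] + sun[2]*sun[2]);
        float dot = (normal[0]*sun[0] + normal[1]*sun[1] + normal[2]*sun[2]) / len;
        return Math.max(dot, 0.0f);
    }

    public static void main(String[] args) {
        float[] objectNormal = {1, 0, 0};   // face pointing +X in object space
        float[] sun = {1, 0, 0};            // directional light along +X

        // Wrong: lighting the object-space normal never changes as the
        // mesh rotates, so the same side is always lit.
        float stuck = diffuse(objectNormal, sun);

        // Right: rotate the normal by the model rotation first.
        float[] rotated = rotateZ(objectNormal, Math.PI); // half-turn of the mesh
        float moving = diffuse(rotated, sun);

        System.out.println(stuck + " vs " + moving); // prints "1.0 vs 0.0"
    }
}
```

After the half-turn the face points away from the sun and goes dark, which is what a fixed directional light should do.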
  7. Oh snap, you are right: I was passing my direction vector in as lookX, lookY, and lookZ, and my position vector as eyeX, eyeY, and eyeZ. So I just changed it to

     [source lang="java"]
     Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, eyeX + lookX, eyeY + lookY, eyeZ + lookZ, upX, upY, upZ);
     [/source]

     And it works now.
  8. Hi all, first post here. In my Renderer class I have these vars:

     [source lang="java"]
     // Direction of camera
     private float lookX = 0.0f;
     private float lookY = 1.0f;
     private float lookZ = 0.0f;

     // Camera tilt
     private float upX = 0.0f;
     private float upY = 0.0f;
     private float upZ = 1.0f;

     // Position of camera
     private float eyeX = 0.0f;
     private float eyeY = 0.0f;
     private float eyeZ = 0.0f;
     [/source]

     Then to set the view matrix I use:

     [source lang="java"]
     Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
     [/source]

     This all works perfectly well while eyeX, eyeY, and eyeZ are all set to 0: I can change the look vector and up vector when I want to rotate the camera and view the world. Works beautifully. The problem is that once I change eyeX, eyeY, and eyeZ to some other arbitrary values, so I can view my meshes from somewhere other than the origin, the rotations no longer work as expected: when I rotate the camera, all the meshes stay in view in front of me and just follow me around. What do I need to do differently? Thanks for any insights.
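The trap in the post above is that the middle three arguments of `setLookAtM` are a *point* the camera looks at, not a direction: the view direction it actually uses is `center - eye`. While the eye sits at the origin the two happen to coincide, which is why it only breaks once the eye moves. A plain-Java sketch (the class name `LookAtCheck` is made up; only the `center - eye` arithmetic is the point):

```java
public class LookAtCheck {
    // The normalized forward direction a look-at matrix is built from.
    static float[] forward(float[] eye, float[] center) {
        float[] d = { center[0] - eye[0], center[1] - eye[1], center[2] - eye[2] };
        float len = (float) Math.sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
        return new float[] { d[0] / len, d[1] / len, d[2] / len };
    }

    public static void main(String[] args) {
        float[] look = {0, 1, 0};   // intended view direction
        float[] eye  = {3, 0, 0};   // camera moved off the origin

        // Passing the direction as the center: the camera looks from
        // (3,0,0) toward the fixed point (0,1,0), not along (0,1,0),
        // so everything seems to follow the camera as it moves.
        float[] wrong = forward(eye, look);

        // Passing eye + look as the center keeps the intended direction.
        float[] right = forward(eye, new float[] {
            eye[0] + look[0], eye[1] + look[1], eye[2] + look[2] });

        System.out.println(wrong[1] + " vs " + right[1]);
    }
}
```

This is the same `eyeX + lookX, eyeY + lookY, eyeZ + lookZ` fix arrived at later in the thread.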