OpenGL FPS Camera

Recommended Posts

I recently went back to some old OpenGL (3.x+) programs of mine and looked over my camera class. The class has a function which calculates the view matrix; the calculations are based on OpenGL's gluLookAt function.

Here's what the class looks like (I've omitted several similar and irrelevant functions to keep the code short):

#include "camera.h"

//Create view matrix
glm::mat4 Camera::createViewMatrix()
{
    glm::vec3 lookDirection = glm::normalize(cameraTargetPos); //Look direction of camera
    static glm::vec3 upDirection = glm::normalize(glm::vec3(0.0f, 1.0f, 0.0f)); //Up direction of camera in the world, aligned with world's y-axis.
    glm::vec3 rightDirection = glm::normalize(glm::cross(lookDirection, upDirection)); //Right direction of camera
    glm::vec3 perpUpDirection = glm::cross(rightDirection, lookDirection); //Re-calculate up direction, basis vectors may not be orthonormal

    static glm::mat4 viewMatrixAxes;

    //Create view matrix, [0] is first column, [1] is second etc.
    viewMatrixAxes[0] = glm::vec4(rightDirection, 0.0f);
    viewMatrixAxes[1] = glm::vec4(perpUpDirection, 0.0f);
    viewMatrixAxes[2] = glm::vec4(-lookDirection, 0.0f);

    viewMatrixAxes = glm::transpose(viewMatrixAxes); //Transpose for inverse

    static glm::mat4 camPosTranslation; //Translate to position of camera

    camPosTranslation[3] = glm::vec4(-cameraPosition, 1.0f);

    viewMatrix = viewMatrixAxes*camPosTranslation;

    return viewMatrix;
}

void Camera::yawRotation(GLfloat radian)
{
    glm::vec3 spherCameraTarget(cartesianToSpherical(cameraTargetPos)); //Convert camera target to spherical coordinates
    spherCameraTarget.y += radian; //Add radian units to camera target (in spherical coordinates)
    cameraTargetPos = sphericalToCartesian(spherCameraTarget); //Convert camera target to cartesian coordinates
}

void Camera::pitchRotation(GLfloat radian)
{
    glm::vec3 spherCameraTarget(cartesianToSpherical(cameraTargetPos)); //Convert camera target to spherical coordinates

    spherCameraTarget.z += radian; //Add radian units to camera target (in spherical coordinates)
    spherCameraTarget.z = glm::clamp(spherCameraTarget.z, 0.0001f, PI); //Clamp the pitch to [0.0001, PI] radians to avoid the poles

    cameraTargetPos = sphericalToCartesian(spherCameraTarget); //Convert camera target back to Cartesian coordinates
}

//Move backward along the camera's backward axis: the third row of the view matrix's rotation part
void Camera::moveBackward(GLfloat moveSpeed)
{
    cameraPosition.x += (viewMatrix[0][2]*moveSpeed);
    cameraPosition.y += (viewMatrix[1][2]*moveSpeed);
    cameraPosition.z += (viewMatrix[2][2]*moveSpeed);
}

//Note: axes deviate from the standard math convention: the y-axis here plays the role of the z-axis
//(and vice versa), and the z-axis is negated. The formulas below are adjusted accordingly.
glm::vec3 Camera::cartesianToSpherical(glm::vec3 cartesianCoordinate)
{
    GLfloat r = (sqrt(pow(cartesianCoordinate.x, 2) + pow(cartesianCoordinate.y, 2) + pow(cartesianCoordinate.z, 2)));

    GLfloat theta = atan2(-cartesianCoordinate.z, cartesianCoordinate.x);
    GLfloat phi = acos(cartesianCoordinate.y/r);

    glm::vec3 sphericalCoordinate(r, theta, phi);

    return sphericalCoordinate;
}

//Note: See notes for cartesianToSpherical() function
glm::vec3 Camera::sphericalToCartesian(glm::vec3 sphericalCoordinate)
{
    GLfloat theta = sphericalCoordinate.y;
    GLfloat phi = sphericalCoordinate.z;

    glm::vec3 cartesianCoordinate(cos(theta)*sin(phi), cos(phi), -sin(theta)*sin(phi));

    return cartesianCoordinate * sphericalCoordinate.x;
}

This worked as expected. I could strafe left/right, up/down and forward/backward, rotate 360 degrees around my up vector and +-90 degrees around my right vector. However, I realized that according to the OpenGL documentation I'm calculating my view matrix wrong. If you look at the first line of code in the createViewMatrix() function, you can see that my look direction is just the normalized camera look-at point. According to the documentation it should be (cameraTargetPos - cameraPosition). When I corrected this, the entire camera system broke down, both the strafing and the rotation. I fixed the strafing so it now works as it should; here's the new code (note that the view matrix is now calculated correctly and that all strafe methods update the camera look-at point as well):

//Create view matrix
glm::mat4 Camera::createViewMatrix()
{
    glm::vec3 lookDirection = glm::normalize(cameraTargetPos-cameraPosition); //Look direction of camera
    static glm::vec3 upDirection = glm::normalize(glm::vec3(0.0f, 1.0f, 0.0f)); //Up direction of camera in the world, aligned with world's y-axis.
    glm::vec3 rightDirection = glm::normalize(glm::cross(lookDirection, upDirection)); //Right direction of camera
    glm::vec3 perpUpDirection = glm::cross(rightDirection, lookDirection); //Re-calculate up direction, basis vectors may not be orthonormal

    static glm::mat4 viewMatrixAxes;

    //Create view matrix, [0] is first column, [1] is second etc.
    viewMatrixAxes[0] = glm::vec4(rightDirection, 0.0f);
    viewMatrixAxes[1] = glm::vec4(perpUpDirection, 0.0f);
    viewMatrixAxes[2] = glm::vec4(-lookDirection, 0.0f);

    viewMatrixAxes = glm::transpose(viewMatrixAxes); //Transpose for inverse

    static glm::mat4 camPosTranslation; //Translate to position of camera

    camPosTranslation[3] = glm::vec4(-cameraPosition, 1.0f);

    viewMatrix = viewMatrixAxes*camPosTranslation;

    return viewMatrix;
}

//Move backward: shift both the look-at point and the camera position so the view direction is preserved
void Camera::moveBackward(GLfloat moveSpeed)
{
    cameraTargetPos.x += (viewMatrix[0][2]*moveSpeed);
    cameraTargetPos.y += (viewMatrix[1][2]*moveSpeed);
    cameraTargetPos.z += (viewMatrix[2][2]*moveSpeed);

    cameraPosition.x += (viewMatrix[0][2]*moveSpeed);
    cameraPosition.y += (viewMatrix[1][2]*moveSpeed);
    cameraPosition.z += (viewMatrix[2][2]*moveSpeed);
}

The rest of the functions in the old code (cartesianToSpherical(), yawRotation etc.) remain unchanged.

 

The rotation still remains broken, and it's hard to describe exactly how. If I'm close enough to my object it works as it should. But if, for example, I move my camera back far enough that the camera look-at point takes on a positive value (it starts out with a negative value), one of the spherical coordinates (theta) ends up negative, so when I rotate around the up vector to the left I end up rotating right instead. Not only that, I never complete a full revolution: it's as if I rotate CW 45 degrees, then CCW 45 degrees, back and forth. There's some other weird behaviour that goes on as well.

 

I'm quite certain it has to do with how I go back and forth between Cartesian and spherical coordinates and the formulas I use, but I'm lost as to how to solve it. That it worked properly with the wrong code is a small miracle in itself.

If you want to try and compile the code and run it to see the effect yourself let me know and I'll attach the source code here.

Any help is appreciated, thanks!


Suen,

 

I'm just going to throw a few things at you. I think you may be confusing yourself. I'd rename some of your variables (e.g. 'cameraTarget' should be 'cameraTargetPos') so you clearly identify a position vs. a direction vector. It's important not to get those confused.

 

For rotation, you can simply make a rotation matrix and multiply it with your view matrix.

 

I'm not familiar with the glm library, but I saw a glm::rotate function that I believe makes a rotation matrix.  Then you can just do...

 

viewMatrix = rotationMatrix*viewMatrixAxes*camPosTranslation;

 

(Or put rotationMatrix at the end; I can't remember which way OpenGL multiplies its matrices.)
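Roughly like this (untested sketch, so double-check the glm docs; note that recent glm versions take the angle in radians, while very old ones took degrees):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> //for glm::rotate

//Minimal sketch: yaw the view by 'angle' radians around the world y-axis
glm::mat4 yawView(const glm::mat4& view, float angle)
{
    glm::mat4 rotationMatrix = glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 1.0f, 0.0f));
    return rotationMatrix * view; //or view * rotationMatrix, depending on which convention you settle on
}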

 

Good luck!

Jeff.


Quoting Jeff: "For rotation, you can simply make a rotation matrix and multiply it with your view matrix. [...] viewMatrix = rotationMatrix*viewMatrixAxes*camPosTranslation"

Well, changing to this actually solved the problem, thanks. And thanks for the renaming suggestion; you're right that I shouldn't confuse a position with a direction vector, my bad.

 

But honestly, I'm still wondering what exactly I'm doing wrong in the original code. I'm merely changing the position of my camera's reference (target) point: to rotate it I switch to spherical coordinates, alter the appropriate component, switch back to Cartesian coordinates, and then calculate the look/forward vector. The camera position remains the same, but since the camera's reference point has now changed, (cameraTargetPos - cameraPosition) should result in a new vector that is rotated by some amount. Am I thinking about this wrong?
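One thing I notice writing this out: cartesianToSpherical() operates on the absolute target point, so the rotation happens around the world origin rather than around the camera. A rough sketch of rotating the camera-to-target offset instead (untested, and it assumes the same class members and conversion helpers as above):

void Camera::yawRotation(GLfloat radian)
{
    //Rotate the offset from the camera to the target, not the absolute target point
    glm::vec3 offset = cameraTargetPos - cameraPosition;
    glm::vec3 spherOffset = cartesianToSpherical(offset); //Offset in spherical coordinates
    spherOffset.y += radian; //Yaw: add radian units around the up axis
    cameraTargetPos = cameraPosition + sphericalToCartesian(spherOffset); //Back to an absolute point
}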

 

edit: changed the name of cameraTarget to cameraTargetPos in my first post as suggested.


About my Camera experiences:

 

  1. Put the projection/perspective (FOV) matrix in the camera class;
  2. Try not to expand the class for simple jobs;
  3. Always look for a simple solution for control, something like camera->Control(Input* input, float dt) (see the sketch below);
  4. Don't use quaternions if you don't know what they do.
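
A minimal sketch of point 3 (Input here is a hypothetical input-state class, and moveSpeed is a placeholder):

#include <glm/glm.hpp>

class Input; //hypothetical input-state wrapper (keyboard/mouse queries)

class Camera
{
public:
    //One entry point per frame: read input and update the camera,
    //scaling movement by dt so speed is independent of frame rate
    void Control(Input* input, float dt);

private:
    glm::vec3 cameraPosition;
    glm::vec3 cameraTargetPos;
    float moveSpeed = 5.0f; //units per second (placeholder)
};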
