About tempvar

  1. Not only is glBufferSubData unnecessary in this case (as samoth explains), but I don't think this line does what you think it does... wouldn't it risk a segfault by trying to read from a nullptr? Maybe the driver checks for it so it's OK, but I see no mention in the OpenGL docs of this function accepting a nullptr for data.   Yeah, I don't need to call it anymore for removing characters. If I were removing from the middle or something, I'd probably need to call it to shift the rest of the data back. If editing rather than removing, I can simply update the texture coordinates of those vertices.
  2. Hmmm, well, I'm an idiot. I fixed it: I wasn't updating the size in the call to glDrawArrays(...), so it was still drawing the number of vertices from before I typed a backspace... Time for sleep :)
  3. Hey guys, I'm currently working on a text-drawing module for my project. It works for the most part, but I'm having trouble 'backspacing' characters (removing a quad from the back of the buffer). At the moment this is how I'm doing things:

1. Create a buffer object with enough size to hold 120 or so characters (a character is a quad made up of 6 vertices).
2. When the user presses a key, add 6 new vertices to the end of the VBO (with the correct tex coords for the character) using glBufferSubData.

This works fine, and I can add those 120 characters by pressing letters on the keyboard (it re-builds the buffer with room for another 120 characters when you reach the limit). The problem is that I want to erase a character when the backspace key is pressed, and it doesn't visually appear to have an effect until you add another character.

Example use case: press 'a', then 'b', then 'c', then 'd'. My buffer contains 4 quads (each quad is 6 vertices) and on screen it prints "abcd" fine. If I press backspace once it should read "abc", but it still reads "abcd". However, if I then add a new character, say 'e', the text becomes "abce". This leads me to believe it did in fact remove the last quad from the buffer BUT wasn't updated until more data was written to that area of memory. So it really did erase that character, but the change doesn't show until I add another quad.

This is how I'm updating my buffer after a backspace:

[CODE]
void Text2D::PopLetter()
{
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo->vData->m_vboId);

    // remove 6 vertices from the end of the vector
    for (int i = 0; i < 6; ++i)
        m_vbo->vData->vertices.pop_back();

    int currSize = m_vbo->vData->vertices.size() * sizeof(Vertex);
    int amountToOverwrite = sizeof(Vertex) * 6;
    glBufferSubData(GL_ARRAY_BUFFER, currSize, amountToOverwrite, nullptr);
}
[/CODE]

So after this function call the last quad is still visible, yet when I add another character the old one is just replaced with the new one, if that makes sense. Is there a better way to do this? Thanks guys
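The resolution in the earlier reply was that the count passed to glDrawArrays was never shrunk; stale vertices left in the VBO are harmless once the draw count is right. A minimal sketch of that bookkeeping in plain C, with hypothetical names (not the poster's actual Text2D class), needing no GL context:

```c
#include <assert.h>
#include <stddef.h>

/* Tracks how many vertices are "live" in a pre-allocated text VBO. */
typedef struct {
    size_t vertexCount;   /* vertices currently in use */
    size_t capacity;      /* vertices the VBO was allocated for */
} TextBuffer;

/* Append one character (6 vertices); returns 0 if the buffer is full.
   In the real code this is where glBufferSubData would upload the quad. */
int pushLetter(TextBuffer* tb) {
    if (tb->vertexCount + 6 > tb->capacity) return 0;
    tb->vertexCount += 6;
    return 1;
}

/* Remove one character: no upload needed, just shrink the count that
   is later passed as glDrawArrays(GL_TRIANGLES, 0, vertexCount). */
void popLetter(TextBuffer* tb) {
    if (tb->vertexCount >= 6) tb->vertexCount -= 6;
}
```

The stale quad past the end is never drawn because glDrawArrays stops at the shrunk count, which is why no glBufferSubData call is needed when backspacing.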
  4. THAT WAS IT! Thank you very much; I can't believe I didn't consider that.
  5. Tried GL_DYNAMIC_DRAW but no change. 
  6. Hey, for the past couple of days I've been running into trouble updating my VBO. I've taken the approach of allocating extra space at the start, so that there is room when I need to add more sprites. Basically I create all the sprites I need at the start and put them in a VBO, but make sure the VBO has space for extra sprites:

[CODE]
// all my sprite data currently resides here
std::vector<Vertex> m_vertices;

// create an OpenGL VAO/VBO
glGenVertexArrays(1, &m_vaoId);
glBindVertexArray(m_vaoId);

// times 6 because there are 6 verts for a quad of two triangles
int size = (m_vertices.size() + (numExtraSprites * 6)) * sizeof(Vertex);

glGenBuffers(1, &m_vboId);
glBindBuffer(GL_ARRAY_BUFFER, m_vboId);
glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, 0, m_vertices.size() * sizeof(Vertex), &m_vertices[0]);

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(0));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), BUFFER_OFFSET(12));
[/CODE]

This is fine; everything works perfectly and draws as expected. I've set it up so that when I click with the mouse, it adds another sprite of the same type at the mouse location. This just involves creating a temporary std::vector<Vertex> with 6 new vertices positioned at the mouse location, then attempting to update my VBO:

[CODE]
void SpriteBatch::Update()
{
    int offset = m_vertices.size() * sizeof(Vertex);
    int size = sizeof(Vertex) * 6; // 6 because there is one more sprite to be updated

    glBindBuffer(GL_ARRAY_BUFFER, m_vboId);
    // m_tempSpriteVec holds the new 6 verts
    glBufferSubData(GL_ARRAY_BUFFER, offset, size, &m_tempSpriteVec[0]);

    m_vertices.insert(m_vertices.end(), m_tempSpriteVec.begin(), m_tempSpriteVec.end());
    m_tempSpriteVec.clear();
}
[/CODE]

This does not work, even though the VBO should have sufficient size. I've noticed that if I just update one of the existing sprites (say the first 6 vertices of the VBO), then it works. But shouldn't the buffer be big enough for the new data? Thanks guys
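The offset arithmetic in Update() can be checked in isolation. This sketch uses hypothetical names and assumes the Vertex layout implied by the attribute pointers above (3 position floats plus 2 UV floats); it computes the byte offset for appending a sprite and a capacity check against the size passed to glBufferData:

```c
#include <assert.h>
#include <stddef.h>

/* Layout implied by the attribute pointers: 3 position + 2 UV floats. */
typedef struct { float x, y, z, u, v; } Vertex;

/* Byte offset at which the next sprite's 6 vertices should be written
   with glBufferSubData, given how many vertices are already in use. */
size_t appendOffsetBytes(size_t usedVertices) {
    return usedVertices * sizeof(Vertex);
}

/* 1 if one more sprite (6 vertices) still fits inside the allocation
   made with glBufferData, 0 otherwise. */
int spriteFits(size_t usedVertices, size_t allocatedBytes) {
    return appendOffsetBytes(usedVertices) + 6 * sizeof(Vertex) <= allocatedBytes;
}
```

If both checks pass for the failing case, the arithmetic is not the culprit and attention shifts to GL state (which buffer is bound, and the count passed to the draw call).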
  7. Hey, thank you for your reply. Sorry it took me a couple of days to get back to you. I managed to get it working, though not exactly 100% how you've shown me:

[CODE]
Matrix4x4 createView(Camera* cam)
{
    Matrix4x4 cameraMat;
    Matrix4x4 view;
    Vector3 rowZ;
    Vector3 row4 = { cam->pos.x, cam->pos.y, cam->pos.z };
    Vector3 rowX;
    Vector3 rowY;
    Vector3 target;

    target = addVec(&cam->pos, &cam->dir);

    rowZ = sub(&target, &cam->pos);
    normalise(&rowZ);

    setEqual(&rowY, &cam->up);
    rowX = crossProd(&rowY, &rowZ);
    normalise(&rowX);

    rowY = crossProd(&rowZ, &rowX);
    normalise(&rowY);

    setRowv(&cameraMat, 0, &rowX, 0.0f);
    setRowv(&cameraMat, 1, &rowY, 0.0f);
    setRowv(&cameraMat, 2, &rowZ, 0.0f);
    setRowv(&cameraMat, 3, &row4, 1.0f);

    view = inverseMat4(&cameraMat); // has to be inverse for camera transformations
    return view;
}
[/CODE]

I've changed a couple of things, such as passing in the camera and accessing the data from that; it's just easier. I was still getting weird results using the setColv function, so I fixed up the setRowv function, and that is what made it work for me (I'm not entirely sure why, because setColv should work anyway):

[CODE]
void setRowv(Matrix4x4* m, int rowNum, Vector3* v, float w)
{
    int index1 = (0 * 4) + rowNum;
    int index2 = (1 * 4) + rowNum;
    int index3 = (2 * 4) + rowNum;
    int index4 = (3 * 4) + rowNum;

    m->m.m1[index1] = v->x;
    m->m.m1[index2] = v->y;
    m->m.m1[index3] = v->z;
    m->m.m1[index4] = w;
}
[/CODE]

I also found out that changing the cam->up vector was actually causing weird rotations. As long as I calculate rowX as the cross product with the up vector (0, 1, 0), it works; if the up vector is set to the new rowY, it gets messed up and I experience some roll when moving the camera around. Thank you for your help and explanations. I think this should be OK from here; feel free to chime in if you notice why it's working like this.
  8. Ah OK, makes sense. I've made those changes and I can see my object again. It's rotating weirdly as I look around, though, and if I translate, the view changes and then after a second or so the object disappears. Also, my object is a terrain, but it currently looks like a flat plane oriented about the up axis, not on the XZ plane like before. I'll post my changes, but I'm sure they adhere to your critiques:

[CODE]
Matrix4x4 createView(Vector3* pos, Vector3* target, Vector3* up)
{
    Matrix4x4 cameraMat;
    Matrix4x4 view;
    Vector3 rowZ;
    Vector3 row4 = { pos->x, pos->y, pos->z };
    Vector3 rowX;
    Vector3 rowY;

    setEqual(&rowY, up);

    rowZ = sub(target, pos);
    normalise(&rowZ);

    rowX = crossProd(&rowY, &rowZ);
    normalise(&rowX);

    rowY = crossProd(&rowX, &rowZ); // new rowY
    normalise(&rowY);

    setColv(&cameraMat, 0, &rowX, 0.0f);
    setColv(&cameraMat, 1, &rowY, 0.0f);
    setColv(&cameraMat, 2, &rowZ, 0.0f);
    setColv(&cameraMat, 3, &row4, 1.0f);

    view = inverseMat4(&cameraMat); // has to be inverse for camera transformations
    return view;
}
[/CODE]

I'm doing translations in my update loop before I call updateCam(&cam). They're just simple changes to the camera position for moving in 6 directions; here's an example:

[CODE]
if (glfwGetKey('A'))
{
    addVec3f(&cam.pos, 0.5f, 0.0f, 0.0f);
    cam.needUpdate = 1;
}
if (glfwGetKey('W'))
{
    addVec3f(&cam.pos, 0.0f, 0.0f, 0.5f);
    cam.needUpdate = 1;
}

void addVec3f(Vector3* vec, float x, float y, float z)
{
    vec->x += x;
    vec->y += y;
    vec->z += z;
}
[/CODE]
  9. Okay. No problem.

And the implementation of sub(...) works as expected. The implementation is okay; however, the invocation is still questionable. Remember that the cross product is not commutative, because a x b = -(b x a). Hence the wrong order yields the reversed direction vector. IMHO you need to compute forward-vector cross up-vector to get the side vector, but you compute up-vector cross forward-vector and get the negative side vector. Please check this.

Nonetheless, what happens "a little further down" doesn't interest the code above. Normalizing a vector means keeping its direction while ensuring a length of 1. Now, rowY is [0,0,0] at the moment of normalization, hence it has no non-vanishing direction and cannot be scaled to have a length of 1. So normalise(&rowY) must die with an error like "division by zero". That is the point I made.

Not sure if I understand your answer here, so I'll explain in detail what I mean. The orientation matrix you want to compute has the requirement that each row/column has a length of 1 and each pair of them is orthogonal. Your forward and up vectors are not necessarily orthogonal when createView is invoked, but the cross product of those 2 vectors will be orthogonal to both. So after the first cross product, only 2 of the 3 required orthogonalities are guaranteed. Hence you need a second cross product to guarantee that the up vector is orthogonal to the other vectors too. In summary:

side := forward x up
new_up := side x forward

First, "column-major" is a term that describes how the elements of the 2D matrix construct are arranged linearly in 1D memory. That is not the topic I meant, although you have to consider it in your routines, of course. What I meant is whether you use column vectors, so that the matrix-vector product reads

p' := M * p

or else row vectors, so that the same matrix-vector product reads

q' := q * N

because there is a mathematical correspondence named the "transpose",

p = q^T, M = N^T

between them. Looking at a pure orientation matrix gives R^T = R^(-1), so confusing row and column vectors gives the inverse rotation. Confusing them when dealing with the position sub-matrix is even worse. When you're using column vectors (which is usual in OpenGL), the side, up, forward, and location vectors have to be set as the columns of the matrix. Now coming to column-major storage (which is also usual in OpenGL): make sure that the linear index is computed as index = row * 4 + column and it should work.

Yes, it is the view matrix.

OK, I've revised my view matrix function and the setRowv function:

[CODE]
Matrix4x4 createView(Vector3* pos, Vector3* target, Vector3* up)
{
    Matrix4x4 cameraMat;
    Matrix4x4 view;
    Vector3 rowZ;
    Vector3 row4 = { pos->x, pos->y, pos->z };
    Vector3 rowX;
    Vector3 rowY;

    setEqual(&rowY, up);

    rowZ = sub(target, pos);
    normalise(&rowZ);

    rowX = crossProd(up, &rowZ);
    normalise(&rowX);

    rowY = crossProd(&rowX, &rowZ); // new rowY
    normalise(&rowY);

    setRowv(&cameraMat, 0, &rowX, 0.0f);
    setRowv(&cameraMat, 1, &rowY, 0.0f);
    setRowv(&cameraMat, 2, &rowZ, 0.0f);

    view = inverseMat4(&cameraMat); // has to be inverse for camera transformations
    setRowv(&cameraMat, 3, &row4, 1.0f);
    return view;
}
[/CODE]

I set the position row after I calculate the inverse so that the inverse is only calculated for the rotation part; is that correct?

setRowv is now this:

[CODE]
void setRowv(Matrix4x4* m, int rowNum, Vector3* v, float w)
{
    int index = rowNum * 4 + 0;
    m->m.m1[index] = v->x;
    m->m.m1[++index] = v->y;
    m->m.m1[++index] = v->z;
    m->m.m1[++index] = w;
}
[/CODE]

I can see some geometry now, but it doesn't resemble anything like I had before. At least before I could see what the object was, and it was just translating incorrectly.
  10. 1. Thanks, I will rename it to "target".
2. I guess so.
3. Because I'm using C, I can't overload operators:

[CODE]
Vector3 sub(Vector3* lhs, Vector3* rhs)
{
    Vector3 vec = { lhs->x - rhs->x, lhs->y - rhs->y, lhs->z - rhs->z };
    return vec;
}
[/CODE]

4. It should be fine; here's the code:

[CODE]
Vector3 crossProd(Vector3* lhs, Vector3* rhs)
{
    Vector3 cross = { (lhs->y * rhs->z) - (lhs->z * rhs->y),
                      (lhs->z * rhs->x) - (lhs->x * rhs->z),
                      (lhs->x * rhs->y) - (lhs->y * rhs->x) };
    return cross;
}
[/CODE]

5. A little further down in the function I call setEqual, which initialises the left parameter to the right parameter; in this case rowY gets set to the up vector.
6. I'll need to change this, because my rowX is currently the cross of up and rowZ.
7. I think I might be treating rows as columns because OpenGL is column-major:

[CODE]
typedef struct
{
    union
    {
        float m1[16];
        float m2[4][4];
    } m;
} Matrix4x4;

void setRowv(Matrix4x4* m, int rowNum, Vector3* v, float w)
{
    m->m.m2[0][rowNum] = v->x;
    m->m.m2[1][rowNum] = v->y;
    m->m.m2[2][rowNum] = v->z;
    m->m.m2[3][rowNum] = w;
}
[/CODE]

8. Brain exploded! So it should just be called the view matrix then?

Thanks for your help; I hope I provided enough info.
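On point 7, one observation worth pinning down with code: in C the union above makes m2[a][b] alias m1[a*4 + b], so each m2-style group of four floats is one contiguous run in memory. Read row-major that run is row g; read column-major (as OpenGL does when uploading with transpose = GL_FALSE) the very same run is column g. That is why a function filling contiguous groups behaves as "set column" under OpenGL's convention. A minimal sketch demonstrating the aliasing:

```c
#include <assert.h>

/* The same union as in the post above. */
typedef struct {
    union {
        float m1[16];
        float m2[4][4];
    } m;
} Matrix4x4;

/* Fills the contiguous group g: m1[g*4 + 0] .. m1[g*4 + 3].
   Interpreted row-major this is row g; interpreted column-major
   (OpenGL's storage convention), the same four floats are column g. */
void setGroup(Matrix4x4* m, int g, float x, float y, float z, float w) {
    m->m.m1[g * 4 + 0] = x;
    m->m.m1[g * 4 + 1] = y;
    m->m.m1[g * 4 + 2] = z;
    m->m.m1[g * 4 + 3] = w;
}
```

So the naming ("setRowv" vs "setColv") is purely a matter of which convention the rest of the math assumes; the memory writes are identical.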
  11. I don't think that's the problem, because the dir that I pass in (shown in the update function) is dir + pos; the subtraction of that sum and the position gives the actual direction.
  12. Hey guys, I'm attempting to roll my own matrix and vector mathematics for my transformations and for creating the MV and MVP matrices I send to my shaders. I've managed to get perspective working, but I'm having a few problems with my view matrix.

What is working: I can look around the world with the camera and everything looks and moves fine.

What's the problem: as soon as I begin to translate my view matrix, I get really weird behaviour which I find hard to explain. When I start moving around the world, the camera seems to change the direction it's looking in.

This is my create-view code:

[CODE]
Matrix4x4 createView(Vector3* pos, Vector3* dir, Vector3* up)
{
    Matrix4x4 mat = identity();
    Matrix4x4 invView = identity();
    Vector3 rowZ = sub(dir, pos);
    Vector3 row4 = { pos->x, pos->y, pos->z };
    Vector3 rowX = crossProd(up, &rowZ);
    Vector3 rowY = {0};

    normalise(&rowZ);
    normalise(&rowX);
    normalise(&rowY);
    setEqual(&rowY, up);

    setRowv(&mat, 0, &rowX, 0.0f);
    setRowv(&mat, 1, &rowY, 0.0f);
    setRowv(&mat, 2, &rowZ, 0.0f);
    setRowv(&mat, 3, &row4, 1.0f);

    invView = inverseMat4(&mat); // has to be inverse for camera transformations
    return invView;
}
[/CODE]

My matrix inverse code is from the Mesa3D glu implementation (http://stackoverflow.com/questions/1148309/inverting-a-4x4-matrix).

And here is where the camera gets updated:

[CODE]
void updateCam(Camera* cam)
{
    Vector3 sum = addVec(&cam->dir, &cam->pos);
    cam->view = createView(&cam->pos, &sum, &cam->up);
}
[/CODE]

For translation I'm just adding directly to the cam.pos vector. Rotation seems to work fine, but here it is anyway:

[CODE]
void rotateView(Camera* cam, float yawAmt, float pitchAmt)
{
    float cosPitch;
    float cosYaw;
    float sinPitch;
    float sinYaw;

    cam->yaw += yawAmt;
    cam->pitch += pitchAmt;

    sinYaw = sin(degToRad(cam->yaw));
    sinPitch = sin(degToRad(cam->pitch));
    cosYaw = cos(degToRad(cam->yaw));
    cosPitch = cos(degToRad(cam->pitch));

    cam->dir.x = cosPitch * sinYaw;
    cam->dir.z = cosPitch * cosYaw;
    cam->dir.y = sinPitch;
}
[/CODE]

Thanks for the help; I'm sure the problem lies within the create-view function.
  13. Second Shader not working

    Ah, found the problem: the second shader was getting unbound somewhere in another function. Good ole debugging.
  14. Second Shader not working

    Just tried that; it didn't change anything, but I should keep them as GLuint because that's what they should have been anyway. I stepped through setting up the uniforms for the second created shader. It had a program ID of 6, which is correct (the first shader has 1 and 2 for vert and frag and 3 for the program ID, and likewise for the second). I then call [CODE]
GLuint uniformId = glGetUniformLocation(shader2, "MVP");
glUniformMatrix4fv(uniformId, 1, GL_FALSE, &mat[0][0]);
[/CODE] and uniformId ends up equal to some absurdly large integer, even though the shader does contain the uniform mat4 MVP variable. If it was shader1 that was bound, this would work fine and uniformId wouldn't be some absurdly large number. I could swap the diffuse and passthrough shaders around and I'd get the same problem; only the first one created will work.
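The "absurdly large integer" here is almost certainly glGetUniformLocation's failure value: it returns a signed GLint, where -1 means the name is not an active uniform in that program, and storing it in an unsigned GLuint wraps -1 to 4294967295. A sketch with stand-in typedefs (the real ones live in the GL headers):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the GL typedefs; the real ones are in the GL headers. */
typedef int32_t  GLint;
typedef uint32_t GLuint;

/* glGetUniformLocation returns a GLint, with -1 meaning "not found /
   not active". Storing that result in a GLuint, as the snippet above
   does, turns -1 into 4294967295: the "absurdly large integer". */
GLuint asUniformId(GLint location) {
    return (GLuint)location;
}
```

So the large number is itself the error code in disguise, and the question to chase is why the lookup fails for the second program, not where the big value comes from.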
  15. Hey everyone, I've got a strange problem, most likely tied to my inexperience with shaders. Basically, only my first shader program seems to work. I'm doing error checking with glGetShaderInfoLog etc. and the shaders seem to compile fine. I've also been trying to print out the active uniform variables, and it only ever works for the first shader that I create. Some pseudocode for my program flow: [CODE]
LoadAllMeshes();

int shader1 = CreateShader("passthrough.vert", "passthrough.frag");
int shader2 = CreateShader("diffuse.vert", "diffuse.frag");

glUseProgram(shader1);

void SetUniforms()
{
    // set uniforms here for shader1 or 2, depending on which is bound
}

void PrintActiveUniforms()
{
    // print active uniforms here. If shader1 is bound, the uniforms print fine.
    // If shader2 is bound, the uniforms do not print out.
}

void Render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    RenderAllGeometry();
    glfwSwapBuffers();
}

void RenderAllGeometry()
{
    for (int i = 0; i < meshes.size(); i++)
    {
        GLuint vao = meshes.at(i)->getVao();
        glBindVertexArray(vao);
        glDrawElements(GL_TRIANGLES, mesh->GetIndexBuffer().Size(), GL_UNSIGNED_INT, BUFFER_OFFSET(0));
        glBindVertexArray(0);
    }
}
[/CODE] This all works perfectly if shader1 is bound, but with shader2 bound nothing is drawn. I've tested both shaders and they work, but it's only ever the first program created that renders the geometry. I can't see why this isn't working, and I can post more code if needed. My shader-loading code is taken from a tutorial website, and I can actually get something drawing with the first bound shader. The only thing I can think of is that vertex array objects might have an effect on the shaders used? Thanks in advance for the assistance.