TRONJon

Members
  • Content count

    13
  • Joined

  • Last visited

Community Reputation

236 Neutral

About TRONJon

  • Rank
    Member

Personal Information

  1. Hey everyone, just wanted to find out how everyone else would tackle this problem in the industry... In my effort to convert my engine to newer OpenGL standards, I'm moving my text renderer from immediate mode to vertex buffers.

     I've made a Text class whose constructor takes a String along with other arguments. This object generates a Vertex Array Object and a buffer object, and writes the quads/triangles needed to draw the string of text into the buffer. This works... until I need to change the text, as with a live FPS counter for example.

     I have a setValue() method which changes the value of an existing Text object by generating a new vertex array and swapping it into the VAO with glBufferData. The problem is, it loses all of the pointers to the vertex data. I have a feeling this is because glBufferData actually replaces the buffer's storage altogether, invalidating the pointers...

     I've tried glBufferSubData(), but that shows the same symptoms (quads drawn all over the game in random places and colours, total nonsense).

     Basically, can someone tell me what's the best way they know of rendering live text in OpenGL nowadays? :)

     - Jonathan
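The per-character quad generation described above can be sketched in plain Java. This is a minimal sketch, not the engine's actual code: it assumes a 16x16 fixed-width glyph atlas, and every name here (TextMesh, buildTextQuads, CHAR_W) is hypothetical. The GL update pattern is shown only in comments, since it needs a live context.

```java
// Hypothetical sketch of quad generation for a dynamic text buffer.
// Assumes a 16x16-cell glyph atlas indexed by char code.
public class TextMesh {
    static final float CHAR_W = 8f, CHAR_H = 16f; // assumed glyph size in pixels

    // Returns interleaved x,y,u,v for two triangles (6 vertices) per character.
    public static float[] buildTextQuads(String text, float x, float y) {
        float[] verts = new float[text.length() * 6 * 4];
        int i = 0;
        for (int c = 0; c < text.length(); c++) {
            int glyph = text.charAt(c);
            float u0 = (glyph % 16) / 16f, v0 = (glyph / 16) / 16f;
            float u1 = u0 + 1f / 16f,      v1 = v0 + 1f / 16f;
            float x0 = x + c * CHAR_W, x1 = x0 + CHAR_W;
            float y0 = y,              y1 = y + CHAR_H;
            float[] quad = {
                x0, y0, u0, v0,  x1, y0, u1, v0,  x1, y1, u1, v1, // triangle 1
                x0, y0, u0, v0,  x1, y1, u1, v1,  x0, y1, u0, v1  // triangle 2
            };
            System.arraycopy(quad, 0, verts, i, quad.length);
            i += quad.length;
        }
        return verts;
    }
    // Per-frame update pattern (GL calls shown as comments):
    //   glBindBuffer(GL_ARRAY_BUFFER, vbo);
    //   glBufferData(GL_ARRAY_BUFFER, capacityBytes, GL_DYNAMIC_DRAW); // orphan old storage
    //   glBufferSubData(GL_ARRAY_BUFFER, 0, buildTextQuads(text, x, y));
    // The VAO's attribute pointers reference the buffer *name*, not its storage,
    // so respecifying data on the same buffer object keeps them valid.
}
```

The usual symptom described in the post (random quads everywhere) points at attribute pointers set up against a different buffer binding, so re-uploading into the same buffer name, rather than creating a new one, is the key part of the pattern.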
  2. Hello, I had to tackle the COLLADA problem a month ago, so I can offer my experience :)

     I have 5 structs which work together:
     COLLADA - The main class holding everything together; contains arrays and methods to use the following classes...
     MESH - The class containing the vertices, texture coords, faces, and a method to render them
     CONTROLLER - Reads the matrices for binding and skinning
     ANIMATION - Simply contains the animated matrices and methods to bind them to a skeleton
     BONE - A simple class with 2 matrices: a local matrix and a world matrix. These have children bones and a parent bone.

     Now, to the vertex weights -

     Every vertex is bound to 1 or more bones, so <vcount> is actually the bone count per vertex. The following: 1, 3, 1, 2, 3, 4, 1, 2 would mean the first vertex in the mesh is bound to 1 bone, the second vertex is bound to 3 bones, etc. <v> is the actual data for those bindings; using the counts listed above, <v> would be laid out like:
     [vertex 1: boneindex, boneweight]
     [vertex 2: boneindex, boneweight, boneindex, boneweight, boneindex, boneweight]
     [vertex 3: boneindex, boneweight]
     [vertex 4: boneindex, boneweight, boneindex, boneweight]
     Etc.

     I hope I could help. My Skype name is on my profile if you need any more help!

     - Jonathan
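The <vcount>/<v> expansion described above can be sketched as a short parser. A minimal sketch, with illustrative names (SkinWeights, boneIndices); note that in the COLLADA spec the second value of each pair in <v> is an index into the weights source array, not the weight value itself.

```java
// Sketch: expand COLLADA <vcount>/<v> into per-vertex bone bindings.
// vcount[i] = number of bones influencing vertex i;
// <v> holds (joint index, weight index) pairs, vcount[i] pairs per vertex.
public class SkinWeights {
    public final int[][] boneIndices;   // boneIndices[vertex][influence]
    public final int[][] weightIndices; // index into the <float_array> of weights

    public SkinWeights(int[] vcount, int[] v) {
        boneIndices = new int[vcount.length][];
        weightIndices = new int[vcount.length][];
        int p = 0; // read position in <v>
        for (int vert = 0; vert < vcount.length; vert++) {
            int n = vcount[vert];
            boneIndices[vert] = new int[n];
            weightIndices[vert] = new int[n];
            for (int j = 0; j < n; j++) {
                boneIndices[vert][j] = v[p++];   // which bone
                weightIndices[vert][j] = v[p++]; // which weight
            }
        }
    }
}
```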
  3. I'm moving away from immediate mode rendering and display lists and have started using vertex buffers... but I've hit a problem when it comes to rendering text.

     For example, when drawing my FPS meter, the value changes all the time. Currently my text renderer sends a texture-mapped quad to the GPU for each character, and this works exactly as I want... but if immediate mode rendering is deprecated, how am I supposed to draw data that changes every frame?

     Surely using buffers would be a bit overkill? What does everyone else use for this?

     - Jonathan
  4. So I've started learning about ARB_bindless_texture, which lets me use memory handles for textures instead of binding to the limited texture units.

     I currently load my textures into the GPU, read the handle using ARBBindlessTexture.glGetTextureHandleARB(tex[i]); and then bind that to a sampler uniform later on when rendering. But I get an "invalid operation" OpenGL error when calling glUniformHandleui64ARB() to link the texture with the sampler. The documentation says:

     The error INVALID_OPERATION is generated by UniformHandleui64{v}ARB if the sampler or image uniform being updated has the "bound_sampler" or "bound_image" layout qualifier.

     But I don't properly understand what this means... I've never heard of a 'bound_sampler' qualifier...

     Please could someone more experienced help me out with this one? I'm stumped, as the documentation is very limited at this point.

     Jon.
  5. So recently I dropped all support for legacy OpenGL matrix functionality from my 3D engine; instead of GL_MODELVIEW, for example, I'm uploading my view matrices to a Uniform Buffer Object to be shared and accessed by my shaders. But as stated in a previous post, this caused a huge FPS drop (from 1000 to 100). I've managed to get this number up to about 400 by trimming out as many matrix calls as possible (for instance, instead of transforming meshes while drawing, I now transform the vertices before uploading to the GPU), but I'm still not happy with the performance.

     Would regular glUniform calls be quicker than using buffer objects? For example, when I bind a shader, I'd pass the current view/projection matrices as a uniform rather than using the UBOs. This would mean each shader has its own copy of the matrices rather than the global (UBO) ones...

     Does anyone know if glUniform() calls are faster than UBO updates? (Which have proven to drop FPS quite significantly.)

     Jonathan.
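One common mitigation for the UBO cost described above is to stage all the matrices in one CPU-side buffer and issue a single upload per frame, instead of one upload per matrix change. A minimal sketch with illustrative names; note that a real LWJGL upload would need a direct buffer (e.g. BufferUtils.createFloatBuffer), while java.nio's heap-backed FloatBuffer is used here so the sketch is self-contained.

```java
import java.nio.FloatBuffer;

// Sketch: pack five std140 mat4s (16 floats each, 80 floats / 320 bytes total)
// into one staging buffer, so the whole block goes up in a single
// glBufferSubData(GL_UNIFORM_BUFFER, 0, staging) call per frame.
public class MatrixBlock {
    public static final int MAT4_FLOATS = 16;
    private final FloatBuffer staging = FloatBuffer.allocate(5 * MAT4_FLOATS);

    // Each matrix is a column-major float[16], the OpenGL convention.
    public FloatBuffer pack(float[] model, float[] view, float[] modelView,
                            float[] proj, float[] mvp) {
        staging.clear();
        staging.put(model).put(view).put(modelView).put(proj).put(mvp);
        staging.flip();
        return staging; // hand to glBufferSubData once, not five times
    }
}
```

Batching matters more than glUniform-vs-UBO here: many small buffer updates per frame can stall the driver, whereas one contiguous update per frame usually does not.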
  6. Thanks guys, this helps a lot! It's a shame we can't have 'virtual texture units'; it would solve so many issues with texturing.

     Jon :)
  7. For my engine, I'm working on the terrain painting functionality in the world editor program. I'm using texture splatting to render my terrain chunks, which gives me a 4-texture limit on individual chunks... and until I find a better technique for texturing, this is how it's going to stay.

     In my editor I have a large range of textures to choose from. Currently, when you paint onto the terrain, the engine searches through the chunk's 4 textures; if it finds the one we're currently painting, it just adds to that channel.

     But the problem comes if we have 4 textures painted onto a chunk and we try painting another... Currently my engine finds the texture with the least coverage and swaps it for the one we're painting. This works, but sometimes it means swapping a texture that I'd rather not lose (it may have less coverage, but it's more important... like a thin path to walk on or something).

     I'm just wondering how other engines/programmers address this problem of painting with limited textures, without hassling the level artist too much?

     Jon
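One lightweight answer to the eviction problem above is to let the artist pin a layer, so the least-coverage swap never discards it. A minimal sketch of that policy under assumed names (SplatChunk, channelFor); none of this is from the original engine.

```java
// Sketch: 4-channel splat chunk where a channel can be "locked" by the artist.
// Eviction picks the least-coverage channel among the unlocked ones only.
public class SplatChunk {
    public final int[] layerTexture = new int[4];   // texture id per channel
    public final float[] coverage = new float[4];   // summed paint weight per channel
    public final boolean[] locked = new boolean[4]; // artist-pinned layers

    // Returns the channel to paint 'texture' into, evicting if necessary;
    // returns -1 if every channel holds a locked layer.
    public int channelFor(int texture) {
        for (int i = 0; i < 4; i++)
            if (layerTexture[i] == texture) return i; // already on this chunk
        int victim = -1;
        for (int i = 0; i < 4; i++) {
            if (locked[i]) continue;
            if (victim == -1 || coverage[i] < coverage[victim]) victim = i;
        }
        if (victim != -1) {
            layerTexture[victim] = texture; // swap the layer in
            coverage[victim] = 0f;          // its old weights are repainted away
        }
        return victim;
    }
}
```

The lock is a one-click hint rather than a hassle: the artist only touches it for the rare "thin path" layer whose coverage is low but whose presence matters.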
  8. So I've been upgrading my engine to use custom view matrices instead of the OpenGL gl_ModelView and gl_Projection, which are deprecated in newer versions.

     As I'm using shadow mapping, skeletal animation, and various other shader techniques, I've split my matrices into:
     modelMatrix
     viewMatrix
     modelViewMatrix
     projectionMatrix
     modelViewProjectionMatrix

     So each time I translate() an object, or manipulate any of these matrices, all 5 are uploaded to my Uniform Buffer Object on the GPU.

     And my FPS has dropped from 1200 to 170. This is unacceptable for me, considering all I've done is change the matrices behind the scenes; nothing has changed in the engine itself.

     Can someone tell me what has caused the drop in performance? I'm guessing it's something along the lines of:
     - My matrix operations in Java are slow
     - Uploading 5 matrices so frequently is using up my bandwidth?
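The usual fix for the pattern above is to stop uploading on every matrix manipulation: compose the derived matrices on the CPU once per draw call, then do one upload. A minimal column-major multiply is all that takes; this is a sketch, and the Mat4 name is illustrative.

```java
// Sketch: CPU-side 4x4 multiply so derived matrices (modelView, MVP) are
// recomputed once per draw call, not re-uploaded on every translate().
public final class Mat4 {
    // out = a * b; all arrays are column-major float[16] (OpenGL convention).
    public static float[] mul(float[] a, float[] b) {
        float[] out = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++) {
                float sum = 0f;
                for (int k = 0; k < 4; k++)
                    sum += a[k * 4 + row] * b[col * 4 + k];
                out[col * 4 + row] = sum;
            }
        return out;
    }

    public static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }
}
// Per draw call, not per matrix op:
//   float[] modelView = Mat4.mul(view, model);
//   float[] mvp       = Mat4.mul(proj, modelView);
//   ...then a single buffer update with the whole block.
```

Plain float[] arithmetic like this is cheap in Java; the 1200-to-170 drop almost certainly comes from the upload frequency, not the math.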
  9. OpenGL GLSL Shared View Matrix

    Fantastic, thanks guys! :)
  10. I'm upgrading all of my OpenGL code to support the newer specifications. As a result, I don't want to use the standard matrices in my GLSL code, such as gl_ModelViewMatrix or gl_ModelViewProjectionMatrix; I wish to use my own matrices. The problem I see is that I'm using many shaders in my engine. It would be counterproductive to upload a separate set of uniform view matrices to every shader, therefore I would need shared matrices.

      I know about Uniform Buffer Objects and uniform blocks... but haven't used them yet. Are these the only way of sharing data between different shaders? I'm just looking for someone with experience to shed some light on the subject.

      Many thanks, Jon.
  11. Thank you! I had a feeling someone would say this :)
  12. The project I'm working on currently uses features such as shadow mapping and skeletal animation... and I'm using a lot of OpenGL 4.x commands. I figure it's about time I move on with the newer standards and start passing my own matrices to my GLSL shaders instead of using GL_MODELVIEW and such. This works for situations in which I position my camera, generate the matrix, upload it to the GPU via a GLSL uniform, and then draw my scene.

      But what if I'm drawing particles, or lots of objects moving about?

      Usually I'd do something like:
      for (Entity e : somelist) {
          glTranslate(e.x, e.y, e.z);
          e.drawModel();
      }

      But how would I do this using OpenGL 4.x standards? Would I have to upload a new matrix before drawing every entity instance?

      Thanks, Jon
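The modern replacement for the glTranslate loop above is to build a model matrix per entity and pass it as a uniform (or, for particles, as a per-instance attribute). A minimal sketch; Entity and the loop shown in comments are hypothetical, and the GL calls need a live context.

```java
// Sketch: per-entity translation matrix replacing the deprecated glTranslate.
public class EntityRenderer {
    // Column-major translation matrix: the offset lives in elements 12..14.
    public static float[] translation(float x, float y, float z) {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f; // identity diagonal
        m[12] = x; m[13] = y; m[14] = z;  // translation column
        return m;
    }

    // Modern equivalent of the loop in the post (GL calls as comments):
    //   for (Entity e : somelist) {
    //       glUniformMatrix4fv(modelMatrixLoc, false, translation(e.x, e.y, e.z));
    //       e.drawModel();
    //   }
    // One small uniform upload per entity is fine for moderate counts; for
    // thousands of particles, put the transform (or just the position) in a
    // per-instance vertex attribute and issue one glDrawArraysInstanced call.
}
```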
  13. Hey everyone, I'm new to these forums... so please be gentle! :)

      I'm just wondering if anyone has put thought towards / knows the solution to display lists with the new OpenGL standards coming up in future versions.

      What I mean is, currently we use glTranslate and glRotate etc. when positioning our camera in 3D space... but I personally also use them when rendering geometry into a display list.

      I'm perfectly at peace with losing these functions and working out our own custom matrices... but what if we need to modify the matrix during a list call? Display lists will only store OpenGL commands, ignoring my own matrix modifications?

      Is the solution simply to make sure that the geometry passed to glVertex3f() is already transformed, without using a matrix within the display list?

      Jon