
OpenGL: Couple of Issues with VBOs


OandO    1566
So I've finally got enough time on my hands to start properly learning OpenGL, but VBOs are throwing me a little. I wrote some model and texture loading code for a single-object OBJ file and a Targa file, and got it all rendering fine. However, it's pretty inefficient; there's a fair bit of duplication I need to get rid of. After getting basic load-and-draw functionality working, I started looking into VBOs, and here I've hit another problem: although I'm using the same data and indexing, my normals and texture coords get screwed up. Both problems stem from the fact that I can't quite pin down how I'm supposed to index the data in any usable, efficient way. I get the principles behind it, but I still find OpenGL's handling of buffers a little vague.

Mesh loading and buffer generation, called once at startup: [url="http://pastebin.com/BEy2RCSc"]http://pastebin.com/BEy2RCSc[/url] (I've commented the dodgy stuff I need help with)
Rendering, called every frame: [url="http://pastebin.com/RAvSHNPc"]http://pastebin.com/RAvSHNPc[/url] (I think this is probably fine, but just in case)
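In case the pastebin links die, here's a minimal sketch of the sort of packed-buffer setup being described: one VBO holding positions, then normals, then texture coordinates back to back, plus an element buffer for indexed drawing. This is not the actual pastebin code; the type, function and variable names are made up, and it assumes an extension loader such as GLEW is already initialised.

[code]#include <GL/glew.h>
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { float u, v; } Vec2;

/* One VBO laid out as [ positions | normals | texcoords ], plus an IBO. */
static void build_buffers(const Vec3 *positions, const Vec3 *normals,
                          const Vec2 *texcoords, const unsigned int *indices,
                          size_t num_vertices, size_t num_indices,
                          GLuint *vbo_out, GLuint *ibo_out)
{
    size_t vertex_size       = num_vertices * sizeof(Vec3);
    size_t normal_size       = num_vertices * sizeof(Vec3);
    size_t texturecoord_size = num_vertices * sizeof(Vec2);

    glGenBuffers(1, vbo_out);
    glBindBuffer(GL_ARRAY_BUFFER, *vbo_out);
    /* Allocate the whole buffer, then upload each attribute block at its offset. */
    glBufferData(GL_ARRAY_BUFFER, vertex_size + normal_size + texturecoord_size,
                 NULL, GL_STATIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, vertex_size, positions);
    glBufferSubData(GL_ARRAY_BUFFER, vertex_size, normal_size, normals);
    glBufferSubData(GL_ARRAY_BUFFER, vertex_size + normal_size,
                    texturecoord_size, texcoords);

    glGenBuffers(1, ibo_out);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, *ibo_out);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, num_indices * sizeof(unsigned int),
                 indices, GL_STATIC_DRAW);
}[/code]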

Just to visualise the problem I'm having, this is the mesh loaded the old way, without VBOs:
[url="http://www.youtube.com/watch?v=3BFUrZobhg8"]http://www.youtube.c...h?v=3BFUrZobhg8[/url]

And with the alterations for VBOs:
[url="http://img43.imageshack.us/img43/8315/screenshot20110603at232.png"]http://img43.imagesh...110603at232.png[/url]
[url="http://img849.imageshack.us/img849/8315/screenshot20110603at232.png"]http://img849.images...110603at232.png[/url]

As you can see, the vertices seem to be fine, but the normals and texture coords are off.

Trienco    2555
The first few lines are already confusing me, and the code looks like you believe a Vector and a Vertex are the same kind of thing. Otherwise it makes zero sense to allocate numNormals*sizeof(Vertex) for your normals.

A vector is just a bunch of coordinates; a vertex is a collection of attributes like position, normal, texcoord, color, etc. If any one of them differs, it's not the same vertex. Don't try anything weird like using one normal for multiple vertices: you can do that in immediate mode, but not with buffers (at least not without trickery). If you feel like trying out something new, maybe you can put something together using this:
[url="http://www.opengl.org/registry/specs/ARB/instanced_arrays.txt"]http://www.opengl.or...nced_arrays.txt[/url]

As far as I understand, the index for each attribute stream is divided by a number you can specify yourself, so technically you could reuse a normal or texcoord if you get the order of all attributes right. Trying a thought experiment for cubes (again, assuming I understand the extension correctly):

[code]normals[6]    = {n1, n2, n3, n4, n5, n6}  // setting a divisor of 4
texcoord[4]   = {t1, t2, t3, t4}           // setting a divisor of 6
position[6*4] = {p1, p2, p3, p4, p5, p6, p7, p8, p9, p10, p11, p12, p13, p14, p15, p16, ....}  // setting a divisor of 1, each 4 entries make up one side of the cube
indices[6*4]  = {0,1,2,3, 4,5,6,7, 8,9,10,11, 12,13,14,15, ....}[/code]

Now when sending your draw call (as quads, for simplicity's sake), you would get the following vertices:

[code]n1, t1, p1
n1, t1, p2
n1, t1, p3
n1, t1, p4

n2, t1, p5
n2, t1, p6[/code]

So after a bit of fiddling, I don't think you can fix the order of the attributes to make it work. If you can find an arrangement that works (or even one that works for generic models), you're a better man than me. Rule of thumb: if you have a sharp edge, you cannot share vertices.
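To put the "one vertex per unique combination of attributes" point into code (a sketch, not from the original post; the struct layout is just an example):

[code]/* A cube corner is shared by three faces, but each face has a different
   normal, so for buffered rendering that corner becomes three distinct
   vertices.  Indices may only be shared where *every* attribute matches. */
typedef struct {
    float position[3];
    float normal[3];
    float texcoord[2];
} Vertex;

/* The same spatial corner (1,1,1), duplicated once per face normal: */
static const Vertex corner_111[3] = {
    { {1.0f, 1.0f, 1.0f}, {0.0f, 0.0f, 1.0f}, {1.0f, 1.0f} },  /* +Z face */
    { {1.0f, 1.0f, 1.0f}, {1.0f, 0.0f, 0.0f}, {0.0f, 1.0f} },  /* +X face */
    { {1.0f, 1.0f, 1.0f}, {0.0f, 1.0f, 0.0f}, {1.0f, 0.0f} },  /* +Y face */
};
/* A flat-shaded cube therefore needs 6 faces * 4 corners = 24 vertices in
   the buffer, even though there are only 8 distinct positions. */[/code]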

OandO    1566
Alright. OBJ files store the attributes separately, so there's a list of all the vertex positions, then a list of all the texture coordinates, etc., which I think is probably what caused my confusion. Thanks for the advice, I'll take a crack at this ASAP.
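For reference, the usual way to handle OBJ's separate index streams (f v/vt/vn) is to build one combined vertex per unique position/texcoord/normal triple and re-index the faces against that. A rough sketch, not the poster's loader; the Vertex struct and names are assumptions:

[code]#include <string.h>

typedef struct {
    float position[3];
    float texcoord[2];
    float normal[3];
} Vertex;

/* Returns the index of an existing identical vertex, or appends a new one.
   Naive O(n^2) search; fine for small meshes, use a hash map for big ones. */
static unsigned int add_vertex(Vertex *vertices, unsigned int *count,
                               const Vertex *candidate)
{
    for (unsigned int i = 0; i < *count; ++i) {
        if (memcmp(&vertices[i], candidate, sizeof(Vertex)) == 0)
            return i;               /* reuse: every attribute matches */
    }
    vertices[*count] = *candidate;  /* new combination: emit a new vertex */
    return (*count)++;
}[/code]
Each face corner in the OBJ file becomes a candidate Vertex built from its three index streams and is pushed through add_vertex; the returned index is what goes into the element array.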

Xycaleth    2391
I didn't go through all the code, but these few lines of code are wrong:

[code] memcpy(mesh->vbo_buffer, mesh->vertices, vertex_size);
memcpy(mesh->vbo_buffer + normal_size, mesh->meshNormals, normal_size);
memcpy(mesh->vbo_buffer + normal_size + texturecoord_size, mesh->meshTexCoords, texturecoord_size);[/code]
The offset for the 2nd line should be vertex_size, and the offset for the 3rd line should be vertex_size + normal_size. Once you've copied the positions (mesh->vertices in your code) in, that data ends at mesh->vbo_buffer + vertex_size, which is where the normals should start, giving the offset for the 2nd line. The same reasoning applied to the normals gives vertex_size + normal_size as the offset for the 3rd line.

So the code should be:


[code] memcpy(mesh->vbo_buffer, mesh->vertices, vertex_size);
memcpy(mesh->vbo_buffer + vertex_size, mesh->meshNormals, normal_size);
memcpy(mesh->vbo_buffer + vertex_size + normal_size, mesh->meshTexCoords, texturecoord_size);[/code]
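With that layout (positions, then normals, then texcoords), the render code's pointer setup has to use the same byte offsets. Assuming the classic fixed-function pointer calls are in use (the render pastebin isn't reproduced here, so mesh->vbo, mesh->ibo and mesh->num_indices are assumed field names), the pattern looks like:

[code]static void draw_mesh(const Mesh *mesh, size_t vertex_size, size_t normal_size)
{
    glBindBuffer(GL_ARRAY_BUFFER, mesh->vbo);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    /* With a VBO bound, the pointer argument is a byte offset into the buffer. */
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glNormalPointer(GL_FLOAT, 0, (const GLvoid *)vertex_size);
    glTexCoordPointer(2, GL_FLOAT, 0, (const GLvoid *)(vertex_size + normal_size));

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh->ibo);
    glDrawElements(GL_TRIANGLES, mesh->num_indices, GL_UNSIGNED_INT, (const GLvoid *)0);
}[/code]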

OandO    1566
/facepalm

I was thinking that myself earlier, could have sworn I'd corrected it... Apparently not.

Edit: Well now it draws perfectly, and I feel a little embarrassed.


