
OpenGL VBO .obj .3ds and some concept questions


edin-m    508
So I have an .obj file and I want to load it into an OpenGL app using a VBO and GL_ELEMENT_ARRAY_BUFFER. It's not production code, so I don't care about performance (I'll consider it later). I'm having trouble converting the UV, vertex, and index data into a "VBO-friendly" form.

I'm using C++ and very few libraries. I'm learning OpenGL, so I don't want to use an engine or a scene graph. I know C++ very well and OpenGL fairly well. I've written an .obj importer and displayed the object, but using immediate mode. Now I have switched to OpenGL 3.3, and I don't care about the .obj normals (I'll recalculate them later).

I have an example where a .3ds loader is used and no data manipulation is done: the data is loaded from the .3ds file into arrays, and from the arrays straight into OpenGL. My 3D content creation application doesn't have a .3ds exporter. I could write one, except I don't know how: I know the technical details (C#, plugin writing, etc.), but I don't know how to convert vertex/UV/face data to .3ds, or to what I'll call a "VBO-friendly" layout.

For example, a triangulated cube has 8 vertices, 20 UVs and 12 faces in the .obj file (I suppose the same would hold for my application's component object model if I were writing a plugin). The .3ds file, on the other hand, has 20 vertices, 20 UVs and 12 faces.

I understand the texel and vertex arrays need to be the same size so that the indices in GL_ELEMENT_ARRAY_BUFFER reference elements of both arrays. How can I convert the .obj data into .3ds/VBO-friendly data? Where do the additional 12 vertices come from in that cube example? I managed to do some kind of conversion, but when I tried another model it didn't work.

I've gone through some examples and loaders, but some are just too complex and others are poor. How would you translate data from .obj to .3ds? (That seems like the best option: I translate to the .3ds layout and use that data.)

P.S. I have read every topic and article I could find about ".obj to OpenGL".
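
To make the mismatch concrete, here's roughly how I picture the two layouts (a sketch only; the names are illustrative, not code from my loader):

[code]
// An .obj face corner carries separate indices for position and UV;
// a face line like "f 1/1 2/3 3/5" gives three such pairs.
struct ObjCorner {
    int positionIndex; // index into the "v" lines
    int uvIndex;       // index into the "vt" lines
};

// glDrawElements with GL_ELEMENT_ARRAY_BUFFER uses a single index per vertex,
// and that one index has to address the position and the UV in the same slot.
struct GlVertex {
    float position[3];
    float uv[2];
};
// So every distinct (positionIndex, uvIndex) combination has to become its own
// GlVertex, which is presumably where the cube's 8 positions turn into 20 vertices.
[/code]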

Murdocki    285
A difference in the number of vertices while representing the same geometric shape usually means the vertices carry texture coordinates or normals, since these are unique for each face a vertex is part of. Sometimes texture coordinates and normals are omitted from the vertices and stored elsewhere, and then accessed through some sort of indexing.
As for converting the file format, I would use one of the many modelling tools available, which can import/export from/to whatever you want. However, in your case it might be better to look into a file-loading library. I know, for example, that the FBX SDK can import .obj. You could also look at Assimp, which seems to do its job very well.
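
In case it helps, loading an .obj through Assimp looks roughly like this (a minimal sketch written from memory of the API, not tested code):

[code]
#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>

// The returned mesh is owned by the importer, so keep the importer alive.
const aiMesh* loadFirstMesh(Assimp::Importer& importer, const char* path)
{
    const aiScene* scene = importer.ReadFile(path,
        aiProcess_Triangulate | aiProcess_JoinIdenticalVertices);
    if (!scene)
        return nullptr; // importer.GetErrorString() says what went wrong

    // mVertices, mTextureCoords[0] and mFaces of this mesh are already
    // "VBO-friendly": one position/UV per vertex plus a single index list.
    return scene->mMeshes[0];
}
[/code]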
If you'd rather not use libraries, you'll need to learn the file format and write an importer yourself, though that will keep you busy and away from actually learning OpenGL for a while.

p.s. texels are texture pixels; you probably meant texture coordinates / UVs here?

edin-m    508
Oh, yes I did; got mixed up, hehe. As for the file formats, I know the structure of both, and writing an importer/exporter is no problem. But generating the data to be written is a bit more difficult. Say I'm writing a converter from .obj to .3ds: no problem reading the .obj file, no problem writing the .3ds file. But if I have a cube with 8 vertices in the .obj file (triangulated), I don't know which 20 vertices to write to the .3ds file.

Note that once I can do the conversion correctly, I could skip generating the .3ds file and feed that data into OpenGL directly. I'm just using .3ds as a reference for a data layout that can be passed to OpenGL without modification.

For the cube, if I have 8 vertices, 20 UVs and 12 faces, I need to use those 12 faces (which hold the UV and vertex array indices) to [i]generate[/i] a total of 20 vertices and a new array of 12 triangles (a face, in .3ds and in my VBO data, is a triangle). I'm in the process of learning and would like to figure this out without external libraries, and it would be very impractical to rely on an external 3D app just for .obj-to-.3ds conversion. I've seen Assimp but haven't tried it. It and OSG are my last resort, and I'll try not to use them (what would I learn otherwise?). Going through books and forums, I'm trying to save myself hours and days of reading their codebases to understand how they dealt with this.

Back to the cube example, here's how the C++ arrays look after loading the .3ds file:

[img]http://imageshack.us/m/171/1906/delak.jpg[/img]

And here's .obj:

[url="http://pastebin.com/LXjbTjsX"]http://pastebin.com/LXjbTjsX[/url]

I have no problem getting that .obj into C++ arrays (vectors, anything). What I need now is to convert that data into the arrays in the image above (which come from the .3ds file of the same cube). I've tried a few ideas of my own, but they only (kind of) worked on the cube example and on nothing else. Maybe the whole idea I have is wrong. I just want to transfer data from the 3D app to OpenGL using a VBO and glDrawElements, and I thought the simplest way was .obj (I don't have a .3ds exporter and cannot use another 3D app).
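
For reference, this is the shape of the data I'm after and how I plan to hand it to OpenGL (an abbreviated sketch; VAO creation, shaders and error checking left out):

[code]
#include <vector>
// assumes a GL 3.3 context, a bound VAO, and a loader such as GLEW/GLAD already set up

void uploadMesh(const std::vector<float>& positions,      // 3 floats per final vertex (20 for the cube)
                const std::vector<float>& uvs,            // 2 floats per final vertex (same count)
                const std::vector<unsigned int>& indices) // 3 per triangle (12 triangles for the cube)
{
    GLuint vbo[2], ibo;
    glGenBuffers(2, vbo);
    glGenBuffers(1, &ibo);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
    glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float), positions.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
    glBufferData(GL_ARRAY_BUFFER, uvs.size() * sizeof(float), uvs.data(), GL_STATIC_DRAW);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(1);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int), indices.data(), GL_STATIC_DRAW);
}

// later, each frame, with the same VAO bound:
//   glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, nullptr);
[/code]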

edin-m    508
I think I have found a solution. Basically, for each face item I check whether the new vertex array already contains the vertex that the i-th poly-face references in the old array. If it does, I also check the UV arrays, and if both match I add that existing index to the indices array; otherwise I append the new vertex/UV to their arrays. I expect there will be more problems, but I've tried a few models and this method works.
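
Here's a rough sketch of the idea (instead of the linear search I described, this uses a std::map keyed on the (vertex index, UV index) pair; same idea, just faster to look up, and all names are only illustrative):

[code]
#include <map>
#include <utility>
#include <vector>

struct ObjMesh {
    std::vector<float> positions;             // from "v" lines, 3 floats each
    std::vector<float> uvs;                   // from "vt" lines, 2 floats each
    std::vector<std::pair<int, int>> corners; // (position index, uv index) per face corner,
                                              // 3 corners per triangle, 0-based
};

struct VboMesh {
    std::vector<float> positions;      // 3 floats per final vertex
    std::vector<float> uvs;            // 2 floats per final vertex
    std::vector<unsigned int> indices; // 3 per triangle, goes into GL_ELEMENT_ARRAY_BUFFER
};

VboMesh weld(const ObjMesh& in)
{
    VboMesh out;
    std::map<std::pair<int, int>, unsigned int> seen; // corner -> final vertex index

    for (const auto& corner : in.corners) {
        auto it = seen.find(corner);
        if (it != seen.end()) {
            // This exact (position, uv) combination was already emitted: reuse its index.
            out.indices.push_back(it->second);
        } else {
            // New combination: append a vertex carrying both attributes and remember it.
            unsigned int newIndex = (unsigned int)(out.positions.size() / 3);
            out.positions.insert(out.positions.end(),
                                 in.positions.begin() + corner.first * 3,
                                 in.positions.begin() + corner.first * 3 + 3);
            out.uvs.insert(out.uvs.end(),
                           in.uvs.begin() + corner.second * 2,
                           in.uvs.begin() + corner.second * 2 + 2);
            seen[corner] = newIndex;
            out.indices.push_back(newIndex);
        }
    }
    return out;
}
[/code]

For the cube this should reproduce the 20 vertices and 12 triangles I was seeing in the .3ds file.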
