BladeWise

OpenGL Generating texture coordinates


BladeWise    128
Ok, after a lot of developing it's time for texturing... I read about texture coordinates all over the net, but many questions are still unanswered:

1. glTexGen* should be used to generate texture coordinates, but how can I use it 'offline', to save texture coordinates for a mesh? I thought I had to pass a function the texture size (width, height), the vertex coordinates (x, y, z) and a mode (for cube/sphere/etc. mapping) to get back the correct (u, v)... but glTexGen* doesn't seem to work that way. I suppose glTexGen* is used to specify what kind of coordinates to generate, while glGetTexGen*v returns values... but if that's the case, how do I specify which vertex to map into texture space? :(
2. Is it possible to use glTexGen* with vertex arrays? I suppose so, but I can't figure out how...
3. Is there a tutorial/example about glTexGen*?
4. When rendering animated meshes, should I compute texture coordinates 'offline' for each key frame, or is it better to let OpenGL generate them at runtime (if I haven't misunderstood glTexGen* usage... and I may well have)?

Texturing is not as easy as I thought, indeed... :P

BladeWise    128
Ok, I finally decided to consult my Red Book and found some useful information... though it didn't clear up all my doubts... here is my first problem. The Red Book says:

--------------------------- RED BOOK -----------------------------------
void glTexGen{ifd}(GLenum coord, GLenum pname, TYPE param);
void glTexGen{ifd}v(GLenum coord, GLenum pname, TYPE *param);

Specifies the functions for automatically generating texture coordinates.

The first parameter <...>

The pname parameter is GL_TEXTURE_GEN_MODE, GL_OBJECT_PLANE, or GL_EYE_PLANE.
If it's GL_TEXTURE_GEN_MODE, param is an integer (or, in the vector version
of the command, points to an integer) that's either GL_OBJECT_LINEAR,
GL_EYE_LINEAR, or GL_SPHERE_MAP. These symbolic constants determine which
function is used to generate the texture coordinate.

With either of the other possible values for pname, param is a pointer to an array of values (for the vector version) specifying parameters for the
texture-generation function.
-----------------------------------------------------------------------
So, it says that I can call glTexGen* or glTexGen*v, depending on my needs:
I can use the first with GL_TEXTURE_GEN_MODE to specify what kind of mapping I intend to do, and set up the texture plane with the second form (the one accepting vectors).
Reading this, I assumed that I could also use glTexGen*(GL_S,GL_OBJECT_PLANE,...) [the non-vector form], but MSDN says definitely NOT:

--------------------------- MSDN ------------------------------------
void glTexGen*( GLenum coord, GLenum pname, GLdouble param);

pname The symbolic name of the texture coordinate-generation function. Must be GL_TEXTURE_GEN_MODE.
---------------------------------------------------------------------

If my understanding is correct, MSDN is right: I can't call the non-vector function with a pname other than GL_TEXTURE_GEN_MODE, since there is no single scalar parameter associated with GL_OBJECT_PLANE or GL_EYE_PLANE (they take four plane coefficients)...
Am I correct about this?
So the non-vector form is only used to select the texture-coordinate generation function, while the vector-based one is needed to define the generation planes...

And here lies my other doubt... how should I define these planes? To generate correct texture coordinates for a complex mesh (like a human body), what kind of planes should I define and, above all, why? I'd like to understand a bit more about it...

Moreover, when rendering dynamic scenes, is it too expensive to call glTexGen* every frame? Is it better to precompute texture coordinates (easily done for object coordinates, but not so good for eye coordinates, since I'd have to recompute them every time the modelview matrix changes... --') or to let OpenGL do this work for me? How do professional programs handle this, if anybody knows? (Since this is my first attempt at a 3D engine, I don't aim to create a professional product, but I'd like to know a bit more about this scenario.)

Thnx in advance :D

noNchaoTic    343
If you're making a 3D engine that uses polygonal models, I don't see why you would need glTexGen, as all the UVs are stored in the model format and you can just keep them in your vertex class or structure (or in some other manner).

BladeWise    128
Yes, but what if I want to deal with dynamic meshes? In that case texture coordinates would have to be recalculated every time the mesh changes its shape... am I wrong? Moreover, what if the model I loaded has no texture coordinates? I would like to be able to generate them... that's why I'm asking about glTexGen, to dig deeper and learn how to calculate them, and why I'm asking now how to define (with what equation) the texture planes... I know I can "copy" the formula provided in the docs to calculate texture coordinates *offline*, but I'd like to understand well what those planes are for and the underlying math... :P

OrangyTang    1298
TexGen is not magic; it can't intelligently map textures onto any random animated mesh and have it work with whatever texture you provide. With models you'll want to load in texture coordinates that have been manually set up for use with a specific texture. Texture coordinates are usually static for animated meshes, unless you're specifically scrolling or warping the texture as well (like for water flow).

TexGen is useful when you want to do on-the-fly placement of textures on existing geometry (like projecting flashlight textures or shadows) or when you want to add an environment map which constantly changes depending on the orientation of the model.

BladeWise    128
Thanks, this is clear now... anyway, if someone could point me to an article/paper/book about texture-mapping theory, I would be glad :D

zedzeek    529
OrangyTang is correct, there is no magic solution; getting texture coordinates for a model is a time-consuming + boring task.
Your best bet for info is a modelling site, e.g. try www.polycount.com

