OpenGL Referencing smaller textures within a big one

Gazoo101    100
Hey GameDev,

I have a texture-related question that I'm unsure can be solved without modifying my current data. I basically have quite a large array, approximately 1024 * 64 (* 3 for RGB), where each block of 64 texels represents a small 8 by 8 texture. I wish to visualize 1024 quads, each with its own unique tiny texture on it. The data is stored sequentially, so one complete 8x8(x3) texture is followed by the next. If I were to upload each texture separately, this could be accomplished quite easily by pointing to the correct place in the array and specifying a width/height of 8 to OpenGL, and voila, a small square texture. However, this would result in a total of 1024 textures on the card. What's more, I will be increasing the number of quad textures up to 16384...
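For reference, a minimal sketch of the per-texture upload I mean (the array name "data" and the loop are just illustrative, and I've left out everything except the essentials):

/* Sketch: upload each 8x8 RGB patch as its own texture.
   'data' is the big sequential array; names are illustrative only. */
GLuint textures[1024];
glGenTextures(1024, textures);
for (int i = 0; i < 1024; ++i) {
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* each patch is 8 * 8 * 3 bytes, stored one after the other */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 8, 8, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, data + i * 8 * 8 * 3);
}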

I am assuming it's infeasible to upload that many textures, so uploading one huge texture containing all of them might be better? But it seems that I would have to restructure the data so it's no longer sequential, but arranged in blocks next to each other, so that I can use texture coordinates to map exactly one of the small textures onto each quad...

Does this sound wise? I would prefer not to have to restructure the data, but right now I am unsure if that is at all possible without touching the data...

Regards,
Gazoo

haegarr    7372
Using a texture atlas: 1024 patches arranged in a square makes a 32 x 32 patch layout. With each patch being 8 x 8 texels, that gives a total texture size of 256 x 256 texels, which is no problem. OpenGL supports some juggling via the glPixelStorei command with the GL_UNPACK_* set of parameters; however, I'm under the impression that those aren't sufficient for you to avoid reordering the data (but you should check that yourself). That said, couldn't you do the reordering programmatically after loading the data from mass storage and before uploading it to the GPU? You have to adapt the texture co-ordinates anyway.

Be aware, though, that such a dense packing of patches into a texture atlas may produce visible artifacts (e.g. seams) in situations where the texture sampler is forced to sample texels of neighbouring patches (which may happen, for example, if you use rotated quads).
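As a rough sketch of that reordering (the buffer names and counts are assumptions based on the numbers you gave; needs <string.h> and the usual GL headers), copying each sequential 8 x 8 patch into its cell of a 256 x 256 atlas could look like this:

/* Sketch: repack 1024 sequential 8x8 RGB patches into a 256x256 atlas.
   'src' holds the patches back to back; 'atlas' is the reordered copy. */
enum { PATCH = 8, PATCHES_PER_ROW = 32, ATLAS = PATCH * PATCHES_PER_ROW };

unsigned char atlas[ATLAS * ATLAS * 3];

for (int i = 0; i < 1024; ++i) {
    int cellX = (i % PATCHES_PER_ROW) * PATCH;           /* atlas column */
    int cellY = (i / PATCHES_PER_ROW) * PATCH;           /* atlas row    */
    const unsigned char *patch = src + i * PATCH * PATCH * 3;
    for (int row = 0; row < PATCH; ++row) {
        /* copy one 8-texel row of the patch into its place in the atlas */
        memcpy(atlas + ((cellY + row) * ATLAS + cellX) * 3,
               patch + row * PATCH * 3,
               PATCH * 3);
    }
}

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, ATLAS, ATLAS, 0,
             GL_RGB, GL_UNSIGNED_BYTE, atlas);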

Another possibility is to use texture arrays. AFAIK your current data layout is okay for such a texture. However, support for texture arrays isn't guaranteed these days. And you still have to adapt the texture co-ordinates, this time not in the u,v dimensions; instead you add a 3rd coordinate to address the texture's layer.
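If texture arrays are available, a minimal upload sketch (assuming GL 3.0 and the sequential layout you describe; 'data' stands in for your array) might look like:

/* Sketch: upload the 1024 sequential 8x8 RGB patches as layers of a
   2D texture array; no reordering needed, each layer is read in turn. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGB, 8, 8, 1024, 0,
             GL_RGB, GL_UNSIGNED_BYTE, data);
/* In the shader, sample with a 3rd coordinate selecting the layer:
   texture(someSampler2DArray, vec3(u, v, layerIndex)) */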

V-man    813
It depends. Are you going to render with all 16384 of them at the same time? If the answer is no and you'll only use some of them, then there will be no performance problem.

If the answer is yes, then you can do as you say: a texture atlas. If you use mipmapping, it can be a problem at the lower mip levels. You would have to prepare the mipmaps yourself and not use glGenerateMipmap (GL 3.0) (or the GL 1.4 equivalent).

Then there are texture arrays, but I'm not sure what the maximum depth of a texture array is these days. They require GL 3.0.
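You can query the limit at runtime; a minimal sketch (a GL 3.0 context and <stdio.h> assumed):

/* Sketch: query how many layers a 2D texture array may have (GL 3.0+). */
GLint maxLayers = 0;
glGetIntegerv(GL_MAX_ARRAY_TEXTURE_LAYERS, &maxLayers);
printf("Max texture array layers: %d\n", maxLayers);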

Gazoo101    100
Hey haegarr and V-man,

Thanks for taking your valuable to render me a reply. I appreciate it! I will not need to render them all at the same time, but after thinking about it for a short while, it hit me - why not just treat the entire texture as a 3D texture? I will always be rendering a small sub portion in a raw (non-mipmapped) format. It's excellent. That way I get the benefit of a Texture atlas without the need to restructure my data. Yay!
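Something along these lines is what I have in mind (just a sketch; 'data' is my sequential array, and the exact filtering is still to be worked out):

/* Sketch: treat the sequential data as an 8 x 8 x 1024 3D texture, so each
   depth slice is one of the small textures. No reordering of the data needed. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB, 8, 8, 1024, 0,
             GL_RGB, GL_UNSIGNED_BYTE, data);
/* When drawing quad i, use an r texture coordinate of (i + 0.5) / 1024.0
   so sampling stays in the middle of slice i and doesn't blend into
   neighbouring slices if filtering is ever enabled. */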

Gonna get right on implementing that...

Regards,
Gazoo

jeroenb    282
I am not sure if I understand your question correctly, but in my 2D engine I render smaller tiles from a bigger tilemap texture by generating texture coordinates that represent each tile in the map. I then use these coordinates to render each tile (quad) of the map.
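For a 32 x 32 grid of 8 x 8 tiles in a 256 x 256 atlas, generating those coordinates is just scaling and offsetting; a rough sketch (names are illustrative):

/* Sketch: texture coordinates of tile 'i' in a 32x32 atlas of 8x8 tiles. */
void tileUV(int i, float *u0, float *v0, float *u1, float *v1)
{
    const float cell = 1.0f / 32.0f;   /* one tile's extent in UV space */
    *u0 = (i % 32) * cell;
    *v0 = (i / 32) * cell;
    *u1 = *u0 + cell;
    *v1 = *v0 + cell;
}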
