OpenGL Spritesheets vs Individual Sprites

addy914


Hello, I am aware that there is a very similar topic already, but it is really old, so I thought I would start a new one to get fresh ideas and opinions.
The question I am asking is: which is better, spritesheets or individual sprites, or what is the right balance between the two?

I am currently working on a game, and I want to make sure it is efficient. I actually split all the characters/NPCs/etc. into their own files, and halfway through I realized how costly that is. I am using DirectX 9, and I believe that even with OpenGL, switching textures with SetTexture is very costly. The pro of keeping individual tiles in their own image files is that you won't use as much memory, because you only load what you need. But it is very inefficient, because SetTexture is expensive and you lose a lot of performance.

Using one big tileset is also a bad idea, because graphics cards have a maximum texture size, and if your tileset is big enough (which is very possible) then your application will not be compatible with some people's computers. It also uses a lot of memory, because you have all the tiles loaded but are probably only using 5-10% of them at any time. The pro is that it is efficient performance-wise: you only have to call SetTexture once and just clip out what you need.

I believe that to be the most efficient, I need to strike a balance between the two. I am thinking of capping each texture at 512x512; once a sheet fills up, I will create a second texture and continue on. Most graphics cards can support a 512x512 texture, and it gives a performance boost because the spritesheets will be split up by category: I call SetTexture once, render the whole category, and move on. It wastes a bit of memory, but not as much as one big tileset would. I would much rather have my application render faster than worry about memory. Most computers can handle the memory, and as long as it isn't ridiculous, it will be fine.
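To make the idea concrete, here is a rough sketch of the per-category render loop I have in mind (C++ with D3D9/ID3DXSprite for illustration; the Category and Sprite types are made up):

[source lang="cpp"]
// Sketch of the per-category render loop described above.
// "Category" and "Sprite" are hypothetical types for illustration.
#include <d3dx9.h>
#include <vector>

struct Sprite { RECT src; float x, y; };

struct Category
{
    IDirect3DTexture9* sheet;      // one 512x512 spritesheet per category
    std::vector<Sprite> visible;   // sprites to draw this frame
};

void RenderCategories(ID3DXSprite* batcher,
                      const std::vector<Category>& categories)
{
    batcher->Begin(D3DXSPRITE_ALPHABLEND);
    for (size_t c = 0; c < categories.size(); ++c)
    {
        const Category& cat = categories[c];
        // All sprites in a category share one sheet, so the texture
        // only changes once per category.
        for (size_t i = 0; i < cat.visible.size(); ++i)
        {
            const Sprite& s = cat.visible[i];
            D3DXVECTOR3 pos(s.x, s.y, 0.0f);
            batcher->Draw(cat.sheet, &s.src, NULL, &pos, 0xFFFFFFFF);
        }
    }
    batcher->End();
}
[/source]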

What are your thoughts on this? This is meant to be a conceptual topic, but it seems like it is leaning towards my particular problem. I just want to hear what you think; maybe that will help me reach a conclusion. And so you know: tilesheet = spritesheet, tiles = sprites. I use them interchangeably for some reason.

I'm working on a 3D space game that, for the enemy spaceships, uses sprites (pre-rendered from every angle) instead of models. All spaceships have 544 angles (32 angles of yaw, 17 of pitch), and each individual sprite is 256x256. We used to load 544 single 256x256 texture files at startup, and on decent hardware that would take at the very least 30 seconds per ship. With 10+ different spaceship classes in a mission, that made for absolutely unacceptable loading times. I then wrote a tool that reads in the 544 sprites and outputs 17 spritesheets per ship (i.e. one sheet for each pitch rotation), each spritesheet containing 32 of the 256x256 sprites (all the yaws for that pitch), so 2048x1024. This reduced loading times to 2-3 seconds per ship.
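For reference, looking a sprite up in a sheet packed like that is trivial. A minimal sketch in C++, assuming the tool writes the yaws row by row, left to right (the actual layout may differ):

[source lang="cpp"]
#include <windows.h>  // for RECT

// Find the source rectangle of one yaw (0..31) inside a 2048x1024
// sheet holding 8 columns x 4 rows of 256x256 sprites.
RECT SpriteRectForYaw(int yawIndex)
{
    const int spriteSize = 256;
    const int columns    = 2048 / spriteSize;  // 8 sprites per row
    RECT r;
    r.left   = (yawIndex % columns) * spriteSize;
    r.top    = (yawIndex / columns) * spriteSize;
    r.right  = r.left + spriteSize;
    r.bottom = r.top + spriteSize;
    return r;
}
[/source]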

In a nutshell: how effective spritesheets are compared to individual sprites depends on your game. Mine is probably one of the strongest cases you'll find; if you're talking about 16 8x8 sprites, on the other hand, it probably won't make much sense. How large you can make your spritesheets depends on the hardware you're targeting. You mention 512x512; that's extremely conservative. If you're not aiming to support really old GPUs you can certainly go up to 4096x4096; I'd be willing to bet that pretty much every GPU from the last six years or so(?) supports that size. Then there's more to it: if you look at memory consumption, you should consider DXTn texture compression. It will also speed up loading times if you're currently converting each loaded texture from PNG or JPEG to RGB(A). Last but not least, make sure to use a vertex shader to select the portion of the spritesheet you actually want; doing this on the CPU is a really bad idea these days, for obvious reasons.

Here's an example of a vertex shader that does this (HLSL):
[source lang="cpp"]
float4x4 mat_worldviewproj;
float4 rect;

struct vs_input
{
float4 position : POSITION;
float2 uv : TEXCOORD0;
};

struct vs_output
{
float4 position : POSITION;
float2 uv : TEXCOORD0;
};

vs_output MyVertexShader ( vs_input input )
{
vs_output output;

// transform vertex into screen space for rendering
output.position = mul ( float4 ( input.position.xyz, 1.0f ), mat_worldviewproj );

// uv atlas (u/v/w/h of sprite in sheet, all range 0..1)
output.uv.x = ( input.uv.x * rect.z ) + rect.x;
output.uv.y = ( input.uv.y * rect.w ) + rect.y;

return output;
}
[/source]

Just feed it the float4 "rect" with the top-left corner coordinates as the first two elements, and the width and height of the portion you want from the spritesheet as the last two (all in the range 0..1; this way the size of your texture is irrelevant to the shader).
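On the application side, filling in that constant from a pixel rectangle might look like this (C++/D3D9; register 4 is just an assumption, use whatever your shader's constant table maps "rect" to):

[source lang="cpp"]
// Convert a sprite's pixel rectangle into the 0..1 "rect" the shader
// expects and upload it. The register number is an assumption - check
// where your compiler/constant table actually placed "rect".
void SetSpriteRect(IDirect3DDevice9* device, const RECT& src,
                   float sheetWidth, float sheetHeight)
{
    float rect[4] =
    {
        src.left / sheetWidth,                  // u offset
        src.top / sheetHeight,                  // v offset
        (src.right - src.left) / sheetWidth,    // width in uv space
        (src.bottom - src.top) / sheetHeight    // height in uv space
    };
    device->SetVertexShaderConstantF(4, rect, 1);
}
[/source]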

Gone on a bit of a tangent there, hope this helps you in some way!

There is one "extended" version of using sprite sheets, and that is using of 3D textures. On programming side you have one texture call, with defining one more texture coordinate beside u and v.

Probably this will be the fastes way.
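A minimal sketch of the idea (C++/D3D9; device is your IDirect3DDevice9, the sizes are placeholders, error handling omitted):

[source lang="cpp"]
// Create a volume texture whose slices hold the individual sheets,
// bind it once, and let the shader pick a slice via the third
// texture coordinate.
IDirect3DVolumeTexture9* volume = NULL;
device->CreateVolumeTexture(1024, 1024, 8,   // width, height, depth (slices)
                            1, 0,            // mip levels, usage
                            D3DFMT_A8R8G8B8, D3DPOOL_MANAGED,
                            &volume, NULL);

// ...fill the slices at load time, then at render time:
device->SetTexture(0, volume);  // one SetTexture for all 8 sheets
// In the shader, sample with (u, v, w), where w = (slice + 0.5f) / 8.0f.
[/source]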

The bigger gain you're going to get from using a spritesheet (you'll also find them referred to as a texture atlas) is not so much from reduced SetTexture calls but more from reduced Draw(Indexed)Primitive calls - if state doesn't need to be changed then you're able to get much more efficient draw call batching, which D3D9 really needs.

That doesn't mean that reducing the number of SetTexture calls is something you can ignore - it still has a benefit - but the primary advantage is going to be elsewhere.

If you're using ID3DXSprite you'll find that it automatically does the required draw call batching for you; if not you'll need to do a little extra work yourself (but it's not that difficult).
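If you do go the manual route, the core of it is accumulating every quad that shares a texture and then issuing a single draw call for the lot. A bare-bones sketch (C++/D3D9; the vertex layout here is just an assumption):

[source lang="cpp"]
// Bare-bones manual batching: gather all quads that use the same
// spritesheet, then draw them with a single call.
#include <d3d9.h>
#include <vector>

struct SpriteVertex { float x, y, z, rhw; float u, v; };
#define SPRITE_FVF (D3DFVF_XYZRHW | D3DFVF_TEX1)

void FlushBatch(IDirect3DDevice9* device, IDirect3DTexture9* sheet,
                const std::vector<SpriteVertex>& verts)  // 2 triangles per quad
{
    if (verts.empty()) return;
    device->SetTexture(0, sheet);               // one texture switch...
    device->SetFVF(SPRITE_FVF);
    device->DrawPrimitiveUP(D3DPT_TRIANGLELIST, // ...and one draw call
                            (UINT)verts.size() / 3,
                            &verts[0], sizeof(SpriteVertex));
}
[/source]

DrawPrimitiveUP is the lazy option for illustration; a dynamic vertex buffer is the production version, but the batching principle is identical.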

Regarding texture size: with SM2.0 hardware you can generally rely on at least 2048x2048 textures being available, although SM2.0 doesn't actually mandate a minimum texture size (I've seen it go down to 1024x1024 but never lower). If you're coding to SM3.0 you're guaranteed at least 4096x4096 - source: http://msdn.microsoft.com/en-us/library/bb219845%28v=VS.85%29.aspx
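Checking at runtime is cheap in any case; something along these lines (C++/D3D9):

[source lang="cpp"]
// Clamp your sheet size to what the device actually reports.
D3DCAPS9 caps;
device->GetDeviceCaps(&caps);
DWORD maxSheet = caps.MaxTextureWidth < caps.MaxTextureHeight
               ? caps.MaxTextureWidth : caps.MaxTextureHeight;
if (maxSheet < 1024)
{
    // fall back to smaller sheets, or report that the GPU is unsupported
}
[/source]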

Thanks d k h, that story is really comforting to know. It tells me that spritesheets are the way to go for sure. I want to support older graphics cards, so I am trying to pick a relatively low texture size that a lot of cards will support. From what mhagain says, I should be safe with 1024x1024, which sounds good.

Even if I were to use 4096x4096 or whatever texture size, I want to answer the question: what if my textures exceed that size? I am trying to think on a bigger scale (and in the game I'm making, I will certainly exceed 1024x1024 a few times). I could make more textures to hold the data, but in the rendering layer I would most likely end up switching back and forth between the first page and the second page. As I slowly add more and more "pages" of textures, I have to switch between them more and more.

Any ideas?

As I stated before, 3D textures are the way to go if you are concerned about switching textures.

If you have any problem with 3D textures (yes, they may not work on really old hardware), then your only option is to determine the maximum texture size available on the machine through the caps, and then dynamically create a bigger texture sheet.

Say you have 4 textures of 1024x1024 and the maximum available texture size on the machine is 4096x4096; then during the loading process you put all 4 textures onto one bigger one. Of course you will need a special class for handling the UVs of dynamically created textures, but that is another story.
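A sketch of that combining step (C++/D3D9; D3DXLoadSurfaceFromSurface does the copying here, and the 2x2 layout is just an example):

[source lang="cpp"]
// Pack four 1024x1024 sheets into one 2048x2048 atlas at load time.
// Remapping the sprites' UVs into the big atlas is handled elsewhere.
void CombineSheets(IDirect3DDevice9* device,
                   IDirect3DTexture9* sheets[4],
                   IDirect3DTexture9** atlasOut)
{
    device->CreateTexture(2048, 2048, 1, 0, D3DFMT_A8R8G8B8,
                          D3DPOOL_MANAGED, atlasOut, NULL);
    IDirect3DSurface9* dst = NULL;
    (*atlasOut)->GetSurfaceLevel(0, &dst);

    for (int i = 0; i < 4; ++i)   // 2x2 grid of source sheets
    {
        IDirect3DSurface9* src = NULL;
        sheets[i]->GetSurfaceLevel(0, &src);
        RECT dstRect = { (i % 2) * 1024,        (i / 2) * 1024,
                         (i % 2) * 1024 + 1024, (i / 2) * 1024 + 1024 };
        D3DXLoadSurfaceFromSurface(dst, NULL, &dstRect,
                                   src, NULL, NULL,
                                   D3DX_FILTER_NONE, 0);
        src->Release();
    }
    dst->Release();
}
[/source]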

But as mhagain stated, you will probably not have too many problems with texture switching; the biggest bottleneck will probably be the number of draw calls.

Just try making 4000 BeginSprite/DrawSprite/EndSprite calls, then do the same with 1 BeginSprite, 4000 DrawSprites and 1 EndSprite, and you will understand the problem.
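That experiment is only a few lines with ID3DXSprite (C++; sprite, sheet, srcRect and positions are assumed to be set up already, timing omitted):

[source lang="cpp"]
// Worst case: a Begin/End pair around every single sprite - every End
// flushes, so you pay for 4000 separate draw submissions.
for (int i = 0; i < 4000; ++i)
{
    sprite->Begin(D3DXSPRITE_ALPHABLEND);
    sprite->Draw(sheet, &srcRect, NULL, &positions[i], 0xFFFFFFFF);
    sprite->End();
}

// Batched: ID3DXSprite accumulates the quads inside one Begin/End
// pair and flushes them in a handful of draw calls.
sprite->Begin(D3DXSPRITE_ALPHABLEND);
for (int i = 0; i < 4000; ++i)
    sprite->Draw(sheet, &srcRect, NULL, &positions[i], 0xFFFFFFFF);
sprite->End();
[/source]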

I believe I am already using 3D textures? I am doing this in C# using SharpDX, and I only use the Direct3D9 class, so I assumed it's a 3D texture. I also have a function to call BeginScene and one to call EndScene. So I call BeginScene, draw everything that is needed on the screen, and then call EndScene & Present to display it. The number of draw calls is inevitable: whether or not I use a spritesheet, I will have to call the draw function approximately the same number of times (not enough difference to notice).

I think I will make another test program and gather some data on how slow SetTexture is.

"I could make another textures to hold this data but whenever I am in the rendering layer, I would most likely end up switching textures from the first page to the second page. As I slowly add on more and more "pages" of textures, the more I have to switch textures between them."
I am still not sure what to do about this issue.

I actually hadn't thought of making a bunch of 1024x1024 spritesheets and then combining them up to the graphics card's maximum texture size. That will make it more efficient. I can also emit an error message if the card doesn't support at least a 1024x1024 texture size.

I am not familiar with SharpDX, but you are probably not using 3D textures out of the box. A 3D texture is made up of a stack of 2D texture "slices": if you have, for example, 10 textures of 1024x1024, you can make one 3D texture with 10 layers of 1024x1024. Then you make one SetTexture call and use 3 coordinates for mapping, instead of the u and v coordinates of an "ordinary" 2D texture.
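Filling the slices in plain D3D9 would look roughly like this (C++; sliceData is a hypothetical array of pointers to the decoded 1024x1024 32-bit RGBA images):

[source lang="cpp"]
// Copy ten 1024x1024 RGBA images into the slices of a volume texture.
// LockBox returns RowPitch and SlicePitch for addressing the volume.
D3DLOCKED_BOX box;
volume->LockBox(0, &box, NULL, 0);
for (int slice = 0; slice < 10; ++slice)
{
    BYTE* dst       = (BYTE*)box.pBits + slice * box.SlicePitch;
    const BYTE* src = sliceData[slice];        // 1024 * 1024 * 4 bytes each
    for (int y = 0; y < 1024; ++y)
        memcpy(dst + y * box.RowPitch, src + y * 1024 * 4, 1024 * 4);
}
volume->UnlockBox(0);
[/source]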

Oh okay, that makes sense; then no, I do not have that. I will have to learn how to do it. Is there a limit on how big a 3D texture can be? The width/height maximums will presumably be the same as for a 2D texture, but what about depth?

I don't think there is any particular restriction (not 100% sure); probably memory is the limit.

However, if I remember correctly, I have read somewhere that the third dimension of the texture should also be a power of 2. So it is wise to have 1, 2, 4, 8, 16, etc. layers.
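If you want to be certain, the caps report it; a quick check (C++/D3D9, device being your IDirect3DDevice9):

[source lang="cpp"]
// MaxVolumeExtent is the limit on each dimension of a volume texture;
// the POW2 bit says whether all three dimensions must be powers of two.
D3DCAPS9 caps;
device->GetDeviceCaps(&caps);
bool volumeSupported = (caps.TextureCaps & D3DPTEXTURECAPS_VOLUMEMAP) != 0;
bool needsPow2       = (caps.TextureCaps & D3DPTEXTURECAPS_VOLUMEMAP_POW2) != 0;
DWORD maxExtent      = caps.MaxVolumeExtent;  // e.g. 256, 1024, 2048...
[/source]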
