Somnia

Lots of questions about animated sprites



The project I'm working on involves putting sprites in a 3D environment; I'm sort of trying to recreate the first couple of Total War games. (Mainly because I'd like to have a crack at the AI side of things eventually, but in the meantime I'm going very slowly through the basics.) I'm using OpenGL and have my textured bill-boarded quads in the game, and have written a first attempt at animation cycles. However, I have a few questions about the best way to put it all together, particularly in terms of things like the data structures for the animations.

For example, a game using isometric-type sprites has lots and lots of frames, since you have 5 orientations for every pose. Clearly you don't want to have a single texture per frame, but what is the best way to distribute your frames among image files? I understand that in general when rendering you want to minimize state changes, and I think binding a texture counts here. I think texture sizes should also be a power of two. So is there some sort of optimal texture size for best results? I've got the idea from somewhere that textures prefer to be square, but I might have just made that up.

You also want to simplify the workflow for your artists, so you can't really have frames distributed arbitrarily all over the place. One thing you could do is have one animation cycle per image, with the frames in a regular grid and the 5 orientations on the vertical axis. The only problem I can see is that different frames needn't be the same size; an attack animation in particular could easily be bigger in the x direction when viewed horizontally. You could still keep the regular grid by using the largest frame to fix the cell size, but then you'd end up with more empty space. Alternatively you could have one cycle and orientation per image.
Instead of a regular grid you could describe the texture coordinates of each frame individually, although you'd want to make a tool for creating your metadata file since that could be a bit laborious.

Also, is it standard practice to have separate textures for things like weapons and shields? I would have thought this could be more trouble than it's worth if each unit can only actually use one anyway. Although in practice your animators probably put the weapons in separate layers when making the sprites, so it's not that big a leap. I can't quite figure out if the rendering order is important, or whether it's a natural consequence of creating them that way that the weapon sprites are transparent in the right places.

So for each unit I need some sort of metadata file that goes through all the textures associated with the unit. Then for each image it lists the animation cycles in it, and for each cycle it lists the frames; for each frame you need quad size, position offset, texture coordinates, weapon/shield positions and frame delay. Is my thinking along the right lines here?

Also, I still haven't got round to learning XML, which I think is popular for this kind of data; does it offer many advantages over just defining a text file format? And any thoughts on TGA vs PNG?

Lastly, there are some games that allow the user to change a sprite's colors. I was looking at the sprites in Baldur's Gate 2 (best 2D game art ever IMO) and I see the player avatars use very unnatural colors, so there is some sort of indexing scheme going on, but I'm not sure how this is done. Are the different color schemes each made by hand, as it were, or is there something more procedural going on? (To explain what I mean: the user chooses a single skin color, but every skin pixel in the sprite is not exactly that color; there's at least some light and dark variation.) Any pointers appreciated. Sorry for the long post. (I thought sticking with 2D would keep things simple!)

Quote:
Original post by Somnia
The project I'm working on involves putting sprites in a 3D environment


Sprites are kind of inherently 2D. Are you planning to turn the sprites to face the camera? Restrict camera movement to a plane, so that you have drawn sprites on a 3D-rendered-but-2D-gameplay background? Something else?

Try making a few screen mockups. You might gain insight yourself; you'll certainly help others figure out how to help you achieve the effects you want.

(Also, consider that 3D tools like OpenGL are often used these days for entirely 2D games, with an orthographic camera projection, simply because hardware acceleration more than compensates for the useless calculation overhead. Graphics cards are scarily powerful beasts now.)

Quote:
For example, for a game using isometric type sprites you have lots and lots of frames, since you have 5 orientations for every pose. Clearly you don't want to have a single texture per frame, but what is the best way to distribute your frames among image files? I understand that in general when rendering you want to minimize state changes, and I think binding a texture counts here. I think texture sizes should also be a power of two. So is there some sort of optimal texture size for best results? I've got the idea from somewhere that textures prefer to be square but I might have just made that up.


Texture sizes are normally a power of two in each dimension, and making them square used to be a good idea for compatibility... not sure that it makes a difference any more, but it can't really hurt, and is unlikely to make things much more difficult for you.

The basic idea is to put related sprites - e.g., different orientations of the same character/object - on the same texture. You can draw them at whatever sizes, and orient them however you like on your "spritesheet", and then rely on texture mapping and all the other 3D math stuff to sort things out. However you'll want to be careful how you do the layout: if you're going to map onto quads, then you'll need to allocate rectangular areas of the texture for each sprite, and not allow them to overlap (unless perhaps you only have transparent pixels in the overlap region) - since otherwise, you'll get garbage bits of other sprites showing up when you draw.
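To make that concrete, here is a minimal sketch (Python used purely for illustration; the function name is made up) of mapping a sprite's pixel rectangle on a spritesheet to the normalized texture coordinates you would hand to OpenGL:

```python
def uv_rect(x, y, w, h, tex_w, tex_h):
    """Convert a sprite's pixel rectangle on a spritesheet into
    normalized (u, v) texture coordinates for the four quad corners."""
    u0, v0 = x / tex_w, y / tex_h
    u1, v1 = (x + w) / tex_w, (y + h) / tex_h
    # corner order: top-left, top-right, bottom-right, bottom-left
    return [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]
```

Note that OpenGL's texture origin is the bottom-left corner while most image tools use top-left, so depending on how you upload the image you may need to flip the v coordinates.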

Arranging things to minimize state changes is kind of tricky... getting it really optimal would require you to sit down and plan, thinking about the order in which things will be drawn. And, you know, things change. You'll add new enemy types, change movement rules, etc. And chances are you can afford huge textures that hold many sprites, which means you won't have that many texture changes anyway. In short, make things easy for the artists first, for you second, and the computer last. If things run slow, then you can try to figure out why.
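If you do later want to cut down on texture binds, the usual move is to sort or bucket the queued draws by texture before issuing them, so each texture is bound once per frame. A sketch (hypothetical names, Python for illustration):

```python
from itertools import groupby

def batch_by_texture(draw_list):
    """Group queued sprite draws by texture ID so each texture is
    bound once per frame instead of once per sprite.
    draw_list is a list of (texture_id, sprite) pairs;
    returns a list of (texture_id, [sprites]) batches."""
    ordered = sorted(draw_list, key=lambda d: d[0])
    return [(tex, [s for _, s in grp])
            for tex, grp in groupby(ordered, key=lambda d: d[0])]
```

One caveat: reordering draws changes painter's order, which matters for alpha-blended sprites, so in practice you only reorder draws that don't overlap or that share the same depth.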

Quote:
The only problem I can see is that different frames needn't be the same size, an attack animation in particular could easily be bigger in the x direction when viewed horizontally. You could still keep the regular grid by using the largest frame to fix the size but then you'd end up with more empty space. Alternatively you could have one cycle and orientation per image. Instead of a regular grid you could describe the texture coordinates of each frame individually, although you'd want to make a tool for creating your meta-data file since that could be a bit laborious.


This is what often gets done. It sounds like you've actually thought this through pretty well. :)

FYI, Photoshop can be scripted with javascript (anywhere), VB (on PCs) or Applescript (on Macs). You can, for example, put each sprite on its own layer, and use a script to measure and record the layer bounds, and flatten and save the texture image.

Quote:
Also is it standard practice to have separate textures for things like weapon and shields? I would have thought this could be more trouble than it's worth if each unit can only actually use one anyway. Although in practice your animators probably put the weapons in separate layers when making the sprites so it's not that big a leap.


Separate sprites, certainly. They might go on the same texture. Maybe there will be a separate texture with all the game weapons on it. But chances are you'll change your mind about "each unit can only actually use one anyway". ;)

Quote:
I can't quite figure out if the rendering order is important or whether it's a natural consequence of creating them in that way that the weapon sprites are transparent in the right places.


You need to draw the character before the weapon, almost certainly. Things drawn later appear on top of things drawn earlier. :) But the weapon sprite is also likely to require transparency.

Quote:
So for each unit I need some sort of meta-data file that goes through all the textures associated with the unit. Then for each image it lists the animation cycles in it, and then for each cycle it lists the frames; for each frame you need quad size, position offset, texture coordinates, weapon/shield positions and frame delay. Is my thinking along the right lines here?


Typically: You store the texture ID (i.e., which texture is the sprite on?), texture coordinates and center of gravity (texture coordinates imply the quad size; "position offset" is ambiguous) of each sprite, and perhaps some "attachment points" (what you refer to as weapon/shield positions). Frame delay is something that should likely either be constant, or handled directly in the code in some other way. You'll want to give a "name" to each sprite, so that you can refer to it in the code. Basically, what you're setting up is an association (mapping) from a name (typically a string) to a sprite-data (some kind of structured data).

Quote:
Also I still haven't got round to learning XML, which I think is popular for this kind of data, does it offer many advantages over just defining a text file format?


It offers advantages, but you can probably gain most of those by other means. Pythonistas (Python programmers) typically just write a file that looks like a huge Python expression that describes all the data they need; then they can open the file and eval() it, and they're done. JSON is a mainstream competitor to XML, and it offers comparable advantages for javascript programmers. (not that you're likely to have javascript involved in the actual game code for a game using OpenGL. :) )

For example, a data file used by Python might look like


{
    'soldier-left': {
        'texture': 'soldiers.png', 'coords': (0, 0, 32, 48),
        'center': (16, 40),
        # near the bottom of the sprite, where the soldier stands... offset a little
        # to account for the shadow that's drawn on the sprite
        'attachment-points': { 'helmet': (16, 16), 'gun': (4, 24), 'boots': (16, 32) }
    },
    'soldier-right': {
        # etc. etc. etc.
    }
}
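A file in that shape can be read back without a full eval(): ast.literal_eval in the standard library accepts only literals (dicts, tuples, strings, numbers), so unlike eval() it cannot run arbitrary code. A sketch, using a cut-down version of the data above:

```python
import ast

def load_sprite_data(text):
    """Parse sprite metadata written as one big Python literal.
    ast.literal_eval accepts only literals, so unlike eval()
    it cannot execute arbitrary code from the data file."""
    return ast.literal_eval(text.strip())

# Cut-down version of the data file above:
data = load_sprite_data("""
{'soldier-left': {'texture': 'soldiers.png',
                  'coords': (0, 0, 32, 48),
                  'center': (16, 40)}}
""")
```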



And then you write code to perform functionality such as


// I'm assuming C++ for no reason at all :)

// Draw the sprite identified by the given name, such that its center of
// gravity aligns with the given coordinate in the given viewport.
void blit(Viewport& v, const std::string& spritename, int x, int y);

// Given a parent sprite, figure out where the named attachment point would be,
// if the parent's center is drawn at the given coordinates. Update the
// coordinates accordingly.
void adjust_coordinates(const std::string& parentname, const std::string& attachment_point, int& x, int& y);



Quote:
Also, any thoughts on TGA vs PNG?


PNG is compressed. It will likely take a little longer to load your game (once textures are loaded, they're all the same), in return for saving disk space. This is a pretty trivial issue, though. There is, in properly done code, approximately one place where it matters (i.e. the function where you load the texture, and it's easy to just write both versions - especially since you can get a library to do the heavy lifting), and it takes basically no effort to re-save from one format to the other. (Also, PNG's compression is lossless, meaning you will not degrade the image by converting back and forth. Don't use JPG!!!)

Quote:
Lastly there are some games that allow the user to change a sprite's colors. I was looking at the sprites in Baldur's Gate 2 (best 2D game art ever IMO) and I see the player avatars use very unnatural colors. So there is some sort of indexing scheme going on


Not necessarily. The colour of a sprite could also be modified by a shader. One common approach is to draw the sprite with high brightness but low contrast (so almost greyscale, tending towards the lighter shades of grey), and then multiply each pixel's colour with a (possibly user-selected) colour, using a shader.
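Per pixel, that multiply is simple arithmetic; a sketch of what the shader computes, written in Python with 0-255 channels just to show the idea:

```python
def tint(pixel, colour):
    """Multiply a near-greyscale pixel by a chosen colour, per channel,
    the way a fragment shader would: out = src * tint / 255.
    A white source pixel takes the tint colour exactly; darker greys
    give darker shades of it."""
    return tuple(p * c // 255 for p, c in zip(pixel, colour))
```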

Another way is to modify the underlying image data (you can leave it in memory in a buffer after you read the file), and then re-load the texture (assuming the library will accept the buffer as input, instead of a file name or file stream). This is probably easiest and fastest with 256-colour PNGs (although of course these have the disadvantage of using only 256 different colours :) ), since they have a "colour table" section that you can modify. (You can't modify a PNG file per-pixel except by loading the image and working on the bitmap, because of the compression. It should work on a TGA, though.) Then the change to a given colour in the "palette" applies to every pixel that had that colour (unless, of course, the colour was duplicated in the palette).
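The palette version works because indexed pixels store indices rather than colours; a tiny sketch, with plain Python lists standing in for the image data:

```python
def recolour(indices, palette, slot, new_colour):
    """Change one entry of an indexed image's palette; every pixel
    whose index points at that slot changes colour at once.
    indices is a 2D list of palette indices; returns the image
    resolved to actual colours with the modified palette."""
    palette = list(palette)          # copy so the caller's palette survives
    palette[slot] = new_colour
    return [[palette[i] for i in row] for row in indices]
```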

I can only speak from experience here; as far as I know there really isn't a 'best' method of handling this, but I can describe how I dealt with it in a past project that confronted me with a number of the issues you are dealing with now.

First off, it is good that you are thinking of the artists who have to actually work with these formats. Creating sprite sheets is hardly an artist's job, in the sense that dealing with layout issues is both tedious and confining in many respects, as well as prone to alignment errors. The solution I came to was to have each figure be its own picture initially, with some sort of known coordinate system. For example, each 'cell' in my sprites consisted of a single image that was 512x512 pixels in size, with the understanding that pixel (256,256) is the 'center' of the image, representing the spot on the ground where the figure would stand. This size was far larger than the actual sprites used, but meant that artists could draw sprites of large sizes without feeling constrained by the borders of the image. I then had a stand-alone tool that shaved off all the unused space and sewed all the images together into a set of large images that were very compact, without any empty space, while also spitting out the metadata you are referring to, which bound coordinates on the sprite sheet to their intended cells.
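The "shave off unused space" pass boils down to finding the tight bounding box of each cell's opaque pixels. A sketch of that step (Python, with a plain 2D list of alpha values standing in for the image):

```python
def trim_bounds(alpha):
    """Find the tight bounding box of non-transparent pixels in a cell,
    so a packing tool can shave empty space before sewing cells into
    an atlas. alpha is a 2D list of alpha values (0 = transparent);
    returns (x, y, w, h), or None for a fully transparent cell."""
    rows = [y for y, row in enumerate(alpha) if any(row)]
    cols = [x for x in range(len(alpha[0])) if any(row[x] for row in alpha)]
    if not rows:
        return None
    x, y = min(cols), min(rows)
    return (x, y, max(cols) - x + 1, max(rows) - y + 1)
```

The tool would also record the trimmed rectangle's offset from the cell's known center pixel, so the in-game quad can still be positioned as if the full cell were there.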

Layering is useful to support as well, for the reason that you hinted at. Making things like swords/shields/[clothing, jewelry, special effects, and many more, even other strange things that I will get to later] into layers is pretty useful. For many of these layers, draw order isn't all that important, but it does become very relevant when you consider things like weapon/armor overlays, specifically getting things like swords to be properly occluded against bulky clothing [instead of drawing the sword graphic with a little slice in it where it is intended that the character would occlude it]. Again, working with your artist and with a specific tool can be very beneficial in this respect, as many image editing programs support layers in their native formats, which greatly simplifies the authoring process.

Layers can also be helpful with respect to sprite coloring. The reason for this is that you can separate regions that you want to be colored differently into different layers, and actually draw these pieces in grayscale. You would store these sections in grayscale, and add the color in later at run time with just a straight multiplication with the color that you are adding [very easy to do, and if you are drawing these cells with individual draw calls, even easier, as you can just specify a vertex color that would be mixed with the gray]. You would then just be drawing each layer with a different color, and everything will come out properly mixed. Just split out which layers belong to which component, and it becomes pretty easy to properly stain the correct components of an image. For example, if you want a green person with a blue shirt with a decal on it, you draw the gray person with a green vertex color on the quad, then the gray shirt with a blue vertex color on the quad, then the decal with no stain. This again is something that your stand-alone tool can greatly facilitate, as it can deal with sewing together images and presenting a more concise view of more complex images.
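Per pixel, that layer-by-layer stain-and-composite amounts to a multiply followed by an ordinary alpha blend. A sketch in Python (0-255 channels; the (grey, alpha, tint) triple per layer is a made-up structure, listed bottom-to-top):

```python
def shade_pixel(stack):
    """Composite one pixel from a stack of (grey, alpha, tint) layers,
    drawn bottom-to-top: each layer's colour is grey * tint / 255,
    blended over the running result by the layer's alpha."""
    out = (0, 0, 0)
    for grey, alpha, tint in stack:
        col = tuple(grey * c // 255 for c in tint)
        out = tuple((col[i] * alpha + out[i] * (255 - alpha)) // 255
                    for i in range(3))
    return out
```

For an untinted layer such as a decal, the tint is simply white, (255, 255, 255), which leaves the source colour unchanged.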

I used shaders to help with this, and sewed layer colors into the textures that I was using. The result was being able to stain only desired parts of the image, and to work with different colors at the same time by specifying a per-pixel index into a color palette. It adds only a small amount of complexity [learning GLSL, which is really easy if you know exactly what you want to do with it and restrict your learning to meeting just that goal], but simplifies things greatly in terms of the process of drawing things.

For file formats, you will likely find that the data you need to maintain doesn't nicely fit into most existing formats unless you are going for a straight single-layer approach. If you use something pre-existing, it is a good idea to use something that natively has an alpha channel [both TGA and PNG fit the bill here, if I recall correctly]. Personally, I rolled my own, since it became very clear to me early on that layering and some layer-associated metadata would be very important to me [and the native layer-supporting formats such as GIMP's format or .psd are really complicated]. I used GIMP to churn out individual images, and then converted those into my own format, which involved sewing together all the layers, annotating coloring layers, and slicing the images down to size.

Just view almost everything you need to do to these images in terms of passes in a tool that converts what your artists are willing to generate into what you are willing to use in-game. This tool doesn't need to be super efficient, and confronting 'hard' problems in this way isn't a big deal, since you will only be passing each sprite through your tool once.

And sticking with 2D *IS* keeping things simple..... a lot simpler than 3D, though many of these same techniques can easily be carried over into 3D [such as mesh coloring]. The bottom line, though, is that most authoring tools emphasize drawing static figures rather than sprites, so you have to take on the burden of bridging the gap yourself, or use an actual sprite authoring program [which are both hard to find and comparatively primitive in terms of usability and value].

Cheers for the replies, Drigovas and Zahlman, lots of stuff to chew over. Thanks for the references to JSON, Python and GLSL in particular; I'll certainly look into those.

Quote:
Sprites are kind of inherently 2D. Are you planning to turn the sprites to face the camera?

Yes. The games I'm trying to emulate are Shogun and Medieval: Total War. They had 3D terrain, which I think back in those days was done with splines, and used sprites for the troops with what I think is cylindrical bill-boarding. That is, the quads move around on the terrain surface as any 3D object would, always pointing directly upward and rotating around the vertical axis to face the camera. (It could be spherical rather than cylindrical actually; it's a bit hard to tell. I've implemented cylindrical in my code and it seems to look all right.) The camera movement has to be constrained to maintain the illusion, obviously: you can only zoom in so far, and the camera pitch can't be too vertical. It might be intentional that you can't strafe the camera either, since this makes it awkward to pan around units and see them popping into the different orientations.
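For reference, cylindrical billboarding of the kind described reduces to computing one yaw angle per sprite. A minimal sketch, assuming a y-up coordinate system and made-up names:

```python
import math

def billboard_yaw(sprite_pos, camera_pos):
    """Cylindrical billboarding: the quad stays vertical and only
    rotates about the up (y) axis to face the camera. Positions are
    (x, y, z) tuples; returns the yaw angle in radians, which you
    would feed to something like glRotatef about the y axis."""
    dx = camera_pos[0] - sprite_pos[0]
    dz = camera_pos[2] - sprite_pos[2]
    return math.atan2(dx, dz)
```

The same angle, quantized into 5 (mirrored to 8) buckets relative to the unit's facing, is also what picks which drawn orientation to display.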

I suppose this is a bit of a strange thing to want to emulate. Those games didn't get many plaudits for the visuals, and the illusion isn't all that convincing, although obviously it's much worse when you know what to look for. (In the same way, I never used to be bothered by playing sprite games where my character's weapons strangely change hands depending on whether they're facing left or right.) But I still have a lot of affection for those games, and I'm in any case more interested in the game logic; I just want to get some kind of content in my game.

Regarding the quad size and position offset: I think the former is necessary since the quads are 3D, unless the program has a constant relationship between number of pixels and "physical" size. I believe the Total War games had two levels of detail for their sprites, which would be one reason why that wouldn't hold. With position offset I was thinking that you might need something to tell you how to position the quad on the height map. In the case of humanoid units the bottom center of the quad can just go at the troop's physical position, unless the frame has empty space at the bottom. For cavalry and anything else with significant extension in the horizontal plane things might be more complicated; I'm not sure how to orientate them on a slope, for example. A bill-boarded quad can also be rotated in the plane of the viewport, which might help, I don't know.

Quote:
I used shaders to help with this, and sewed layer colors into the textures that I was using. The result was being able to stain only desired parts of the image, and to work with different colors at the same time by specifying a per-pixel index into a color palette. It adds only a small amount of complexity [learning GLSL, which is really easy if you know exactly what you want to do with it, and restrict your learning to meeting just that goal], but simplifies things greatly in terms of the process of drawing things.

I think I follow this. So you export the different layers into separate images from your authoring program, then run your tool to wrap the layers into a single custom file, in which some of the layers will be greyscale or almost greyscale and marked with type information. Then you load the different layers separately and combine them into one texture, in which the alpha values of the layers tell you how to construct the per-pixel index, which is passed along with the palette into the shader.


