
Why is the first spritesheet laid out like this? It seems harder for a sprite-reader system to read

21 posts in this topic

I'm not sure if this is the actual spritesheet from the NES title Metroid, but if it is, why are the sprites in the first sheet laid out this way? Wouldn't it be harder for a spritesheet reader to work through, since the sprites you want to read are not laid out linearly the way they are in the second spritesheet?

[url=http://s33.photobucket.com/user/warnexus/media/metroid_zps0a136337.gif.html]First spritesheet[/url]

[url=http://s33.photobucket.com/user/warnexus/media/metroidsnes_zps90477a4b.gif.html]Second spritesheet[/url]

Would the best spritesheet be one like the second sheet, where everything is tiled consistently (the same width and height for each sprite) and laid out linearly?

Edited by warnexus

I'm not sure it really makes much difference. The regions are just rectangles either way. I could see a tiny advantage in the second approach where you don't have to store each rectangle separately (since you can compute them all from known dimensions)... but that doesn't help much if you only have memory for so many pixels and you need different-sized sprites to fit onto a single bitmap.
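The "compute them all from known dimensions" point can be sketched in a few lines (Python; the function name and layout assumptions are hypothetical): with a uniform grid you only need the frame size and the sheet width to recover any frame's rectangle from its index.

```python
def frame_rect(index, frame_w, frame_h, sheet_w):
    """Pixel rectangle (x, y, w, h) of frame `index` in a uniform-grid sheet,
    assuming frames are packed left-to-right, top-to-bottom."""
    cols = sheet_w // frame_w          # frames per row
    x = (index % cols) * frame_w
    y = (index // cols) * frame_h
    return (x, y, frame_w, frame_h)

# e.g. frame 5 on a 128-px-wide sheet of 32x32 frames:
print(frame_rect(5, 32, 32, 128))      # (32, 32, 32, 32)
```

With irregular frames you lose this shortcut and have to store one rectangle per frame instead.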


…'frame' uv rect to a file.

What is a UV?

Also, what do you mean by atlas? Do you mean a map?

Edited by warnexus

What is a UV?

Also, what do you mean by atlas? Do you mean a map?

UVs are the texture coordinates, running from (0.0, 0.0) at the top left to (1.0, 1.0) at the bottom right. That's how you address a portion of the texture.
A UV pair is specified for each vertex of the mesh you want to draw; it's how you map a texture onto a mesh.
In the case of a sprite, you have a quad (or two triangles), which means 4 points, so you have 4 positions and 4 UV coordinates.
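To make that concrete, here is a minimal sketch (Python; the function name is hypothetical) of turning a pixel rectangle in an atlas into the four UV pairs of a sprite quad:

```python
def rect_to_uvs(x, y, w, h, tex_w, tex_h):
    """Convert a pixel rectangle into the four UV pairs of a sprite quad.
    UVs run from (0.0, 0.0) at the texture's top left to (1.0, 1.0) at its
    bottom right, so each pixel coordinate is divided by the texture size."""
    u0, v0 = x / tex_w, y / tex_h
    u1, v1 = (x + w) / tex_w, (y + h) / tex_h
    # top-left, top-right, bottom-left, bottom-right
    return [(u0, v0), (u1, v0), (u0, v1), (u1, v1)]

# A 64x64 frame at pixel (64, 0) in a 256x128 atlas:
print(rect_to_uvs(64, 0, 64, 64, 256, 128))
# [(0.25, 0.0), (0.5, 0.0), (0.25, 0.5), (0.5, 0.5)]
```

A sprite renderer just assigns these four UV pairs to the quad's four vertices.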

Are UV coordinates necessarily required for 2D game programming, or are they mainly used for 3D game programming? When I programmed my 2D RPG, I never needed to worry about UV coordinates. Are UV coordinates specified to save memory?


UVs are necessary for "sampling" textures (grabbing texture pixels in shaders) in both Direct3D and OpenGL.

There's no way around it. It's not to save memory; it's for accessing the texture's pixels (more correctly, texels, since pixels are on the screen).

So if the software/library you used was built on one of those APIs, it was handling this for you.

Note that there's no "2D" concept in those APIs. GPUs are rasterizers that feed vertex data to shaders; shaders work on that data and output to render targets (textures displayed on the screen).

The "3D" or "2D" is entirely the API user's responsibility; it's all about transforming points with matrices. Those points are rasterized to the screen according to the topology specified in the API (lines, triangle lists, etc.).

So whether UVs are necessary for 2D game programming depends on what level* you work at: if you're using those APIs directly, you certainly need them. If not, don't worry; it's being handled for you.

(*level as in low-level / high-level, not as in beginner / advanced)
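The "2D is just transforming points with matrices" idea can be sketched like this (Python; names and the exact matrix convention are just an illustration, assuming a y-down pixel space mapped to y-up clip space):

```python
def ortho_2d(width, height):
    """Row-major 4x4 orthographic matrix mapping pixel coordinates
    (0..width, 0..height, y down) to clip space (-1..1, y up)."""
    return [
        [2.0 / width,  0.0,           0.0, -1.0],
        [0.0,          -2.0 / height, 0.0,  1.0],
        [0.0,          0.0,           1.0,  0.0],
        [0.0,          0.0,           0.0,  1.0],
    ]

def transform(m, x, y):
    """Apply the matrix to a 2D point (z and w omitted for brevity)."""
    return (m[0][0] * x + m[0][3], m[1][1] * y + m[1][3])

m = ortho_2d(1024, 512)
print(transform(m, 512, 256))   # screen centre maps to (0.0, 0.0)
print(transform(m, 0, 0))       # top-left corner maps to (-1.0, 1.0)
```

A "2D" renderer is just this matrix plus textured quads; the GPU itself never knows the difference.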

--

Now, I don't know if it's even possible to draw things without those APIs these days. Even Windows GDI uses DirectX underneath. So I don't know... I'm curious: how are Flash applications drawn? Anyone know? Where are the vectors translated to pixels?

-edit-

Crazy; Flash is pretty damn fast considering the rendering madness it has to go through.

Edited by Icebone1000

The second one is not a (usable) spritesheet. It's probably just frames of the character ripped out from the game and stored in a single texture.

Note the first row, frames 2 and 5: they are clearly of different widths. This is not a big problem, as you can just store each individual frame's width in a descriptor file.

Note the last row, how frames 2-4 overlap each other, so you cannot take an AABB rect of just one frame without having something left over from another frame. This is what gives it away as not being an actual spritesheet for use in a game.

Thank you. I've seen sheets like this before (the 2nd one) and have always wondered how games even pull the sprites from there.


As Strewya mentioned, the bottom one has no purpose, since almost all the images overlap. Whoever made it seems to enjoy wasting time making things no one can use.

As ApochPiQ mentioned, it makes no difference, since they are just texture coordinates. Why would a computer care whether they are laid out horizontally? Humans care about that, not computers.
However, the second one (if it were valid) is laid out randomly (as opposed to systematically, as in the first one). A computer does care about numbers, and numbers that round off to powers of 2 are healthy for a computer, so the first one would be better: all the coordinates are divisible by 8, 16, 32, etc. Floating-point numbers are not always precise, so you have to care about the UV coordinates in any texture atlas.

DiegoSLTS is incorrect in saying that texture atlases did not exist in the old days. Since the dawn of time texture swaps were expensive, so texture atlasing was one of the earliest and most primitive optimizations there was. Yes, they only read 8×8 blocks from them, so drawing Mario took 2 calls, but the calls read from the same texture. Mario's head would only be in one 8×8 block, then a bunch of different lower 8×8 body parts. It would all still be part of the same texture atlas.

L. Spiro

Edited by L. Spiro

It should be mentioned that "sprite" nowadays does not necessarily mean the old school image rectangle. Advanced sprite techniques exist that allow for lighting effects, for example. There is also a technique that allows for animation by mesh morphing. The game made by using this is one about zoo animals; I currently do not remember its name. This technique is actually based on irregular meshes and hence would principally allow for an "overlapping" packing like in the 2nd atlas. However, that technique also requires a higher texel resolution than those shown in the OP to look good. (In other words, the 2nd atlas is probably not an example of this technique.)

Using irregular bounding boxes is also not exotic these days. Texture packers usually export a file besides the texture, wherein the clips are stored by name/index and texel co-ordinates. However, the usual texture packers still deal with AABB only, even if they support bin packing.

Please stop calling them “sprite sheets”. It’s an insult to the industry.

Well, this is interesting. In the world of perhaps-not-exactly-professional 2D game makers, there are many examples of tools and libraries that actually use the term "sprite sheet": for example TexturePacker, Zwoptex, cocos2d, and perhaps most noticeably Flash (at least in CS6). So the term was effectively introduced in the pre-industrial phase.


I must work with a bunch of monkeys as I've heard sprite sheet and texture atlas used interchangeably for my entire career.

The Samus sheet might be a TexturePacker output image. As I recall, it can trim transparency from source images and give you an offset to place the trimmed image inside the full-sized frame.


I've actually heard the terms "sprite sheet" and "texture atlas" used to refer to different things. A "texture atlas" is simply a collection of textures packaged into a single image. A "sprite sheet" is a texture atlas which is specifically used to store sprite assets for a game object (like a character) and has the additional restrictions that each of the sub-images are the same size and are laid out in some regular fashion to improve the efficiency of animation frame lookup. For what it's worth, I didn't hear the term "texture atlas" until my first industry job, where they were used for packaging custom GUI element textures together. I've never heard "texture atlas" used in the context of sprite animation.

Can I call it a sprite atlas?

The second image is impossible to use. OK, maybe not impossible, but a pain to use. All the frames overlap each other, so it is not just a matter of copying a rectangle.

If you tried to copy the rectangle for the second frame of the "running animation" it would also render the foot from the first frame and the gun from the third one. So you'd need to create a mechanism that ignored these parts at runtime or erased them by actually removing the overlaps when loaded or something like that.

As for the first image, it does have arbitrary positioning, but that is easy to solve simply by storing the information you'll need, in this case the offset and size of each frame. For example:

PlayerRunning		#The Animation Name
3			#The total number of frames
(306, 234)(350, 278)	#The first frame, start and end coords of the rectangle
(394, 234)(430, 290)	#Second Frame
(354, 233)(385, 291)	#Third Frame
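A descriptor in that style could be loaded with a few lines of Python (a sketch only; the format and the animation data are just the hypothetical example above):

```python
import re

def parse_animation(text):
    """Parse the simple descriptor format above: a name line, a frame-count
    line, then one `(x0, y0)(x1, y1)` rectangle per frame. Trailing `#`
    comments are stripped. Returns (name, [(x0, y0, x1, y1), ...])."""
    lines = [line.split('#')[0].strip() for line in text.strip().splitlines()]
    name = lines[0]
    count = int(lines[1])
    frames = []
    for line in lines[2:2 + count]:
        nums = [int(n) for n in re.findall(r'-?\d+', line)]
        frames.append(tuple(nums))   # (x0, y0, x1, y1)
    return name, frames

desc = """PlayerRunning          #The Animation Name
3                      #The total number of frames
(306, 234)(350, 278)   #First frame
(394, 234)(430, 290)   #Second frame
(354, 233)(385, 291)   #Third frame"""

name, frames = parse_animation(desc)
print(name, frames[0])   # PlayerRunning (306, 234, 350, 278)
```

At load time the game reads this once and never has to guess where a frame sits in the sheet.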


NES games didn't have a spritesheet; there's no image stored from which to read the sprites.

Quoted for Emphasis.

Basically everything before the PS1/Saturn/N64 had no concept of an image or texture -- just some bits that fit into a handful of fixed-size buckets. When you see sprite rips of these old games, that's just the people making them transforming the data into a familiar format for modern use. The color values and color depth you see in these files are likewise a modern abstraction of how the original hardware would have understood those bits together with the values written to the hardware color palette. Early PCs, Amigas, Atari STs and the like worked as a reasonable facsimile of how modern games operate, but earlier computers and game consoles up until about '95 worked very differently.

Wow, hardware color palettes were used in the past? That must have been a nightmare!

I really appreciate you demystifying the origins of past games for me.
