Why is the first spritesheet laid out like this? It seems harder for a sprite-reader system to parse

Started by
20 comments, last by Nicholas Kong 10 years ago

I'm not sure if this is the actual spritesheet from the NES title Metroid, but if it is, why are all the sprites in the first sheet laid out this way? Wouldn't it be harder for a spritesheet reader to parse this sheet, since the sprites you want to read are not laid out linearly the way they are in the second spritesheet?

[image: metroid_zps0a136337.gif]

[image: metroidsnes_zps90477a4b.gif]

Would the best spritesheet be one like the second, where everything is tiled uniformly (the same width and height for each sprite) and laid out in a linear fashion?


I'm not sure it really makes much difference. The regions are just rectangles either way. I could see a tiny advantage in the second approach where you don't have to store each rectangle separately (since you can compute them all from known dimensions)... but that doesn't help much if you only have memory for so many pixels and you need different-sized sprites to fit onto a single bitmap.
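To illustrate that trade-off, here's a minimal sketch (all names are hypothetical) of the two approaches: a uniform grid lets the reader compute every frame rect from its index, while a packed sheet has to store each rect explicitly.

```python
# Two ways a sprite reader can locate frames in a sheet (illustrative sketch).

def grid_rect(index, sheet_cols, w, h):
    """Uniform sheet: every frame rect is computable from its index."""
    col, row = index % sheet_cols, index // sheet_cols
    return (col * w, row * h, w, h)  # (x, y, width, height)

# Packed sheet: rects must be stored explicitly, one per frame,
# because frame sizes and positions are irregular.
PACKED_RECTS = {
    "samus_stand": (0, 0, 16, 32),
    "samus_run_0": (16, 0, 24, 32),  # widths may differ per frame
}

print(grid_rect(5, sheet_cols=4, w=16, h=32))  # -> (16, 32, 16, 32)
print(PACKED_RECTS["samus_run_0"])             # -> (16, 0, 24, 32)
```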

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

NES games didn't have a spritesheet; there's no image stored from which to read the sprites. Sprites (or tiles) were either 8x8 or 8x16 pixels, and those sprites are combined (if one sprite is not enough) to draw the character, or enemy, or whatever is not the background. For a door, for example, there's probably one tile for the square section, one for the rounded corner, and one for the door's centre, and those are reused to draw the full door.

Also, the tiles are not the final images: the pattern table stores only a 2-bit colour index per pixel, and the final colour you see comes from the palette. In the door example, there's only one square tile, and it's drawn red or blue as needed.

This is how sprite data is stored and used: http://hackipedia.org/Platform/Nintendo/NES/tutorial,%20NES%20programming%20101/NES101.html#sprite

For SNES games it's similar. Old games were very hardware-dependent, and the hardware had lots of limitations and "hacks", so things like drawing, moving, collision detection, and almost everything else were more complex than they are today.
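As a rough illustration of the pattern-table format that tutorial describes: each 8x8 NES tile is 16 bytes, stored as two 8-byte bitplanes, and each pixel is a 2-bit palette index (0-3). A minimal decoder sketch:

```python
def decode_nes_tile(tile16):
    """Decode one 8x8 NES tile (16 bytes: two 8-byte bitplanes)
    into 8 rows of 2-bit palette indices (0-3)."""
    assert len(tile16) == 16
    rows = []
    for y in range(8):
        lo, hi = tile16[y], tile16[y + 8]  # plane 0 row, plane 1 row
        row = [((lo >> (7 - x)) & 1) | (((hi >> (7 - x)) & 1) << 1)
               for x in range(8)]
        rows.append(row)
    return rows

# A tile whose top row has both bitplanes set (palette index 3):
tile = bytes([0xFF] + [0] * 7 + [0xFF] + [0] * 7)
print(decode_nes_tile(tile)[0])  # -> [3, 3, 3, 3, 3, 3, 3, 3]
print(decode_nes_tile(tile)[1])  # -> [0, 0, 0, 0, 0, 0, 0, 0]
```

The palette (not the pattern table) then maps each index 0-3 to an actual colour, which is how one tile can be drawn red or blue.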

The frames in the second one aren't even clean rects; it's a mess, as each sprite would need to be a different mesh.

I have my own texture tool that packs a bunch of sprites into a single atlas and saves each frame's UV rect to a file, so the order they show up in is random.

Note that for animated sprites there's a problem with trimming transparent space: you need to keep the frames of a single animation positioned relative to each other. If the character has its arms open in one frame and closed in the next, the animation would bounce, because the frames have different sizes even though the character's position shouldn't change. (Generally people leave the transparent areas untouched, with no trimming. In my engine each animation frame has an offset to compensate: my texture tool does the trimming and computes the offset needed to keep the animation correct, based on the amount removed from each side.)
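The trim-plus-offset idea can be sketched like this (a simplified illustration, using 0/1 for transparent/opaque pixels; names are mine, not any real tool's):

```python
def trim_frame(pixels):
    """pixels: 2D list of 0 (transparent) / 1 (opaque).
    Returns (trimmed_pixels, (offset_x, offset_y)); the renderer draws
    the trimmed frame at (pos_x + offset_x, pos_y + offset_y) so the
    animation doesn't bounce."""
    ys = [y for y, row in enumerate(pixels) if any(row)]
    xs = [x for x in range(len(pixels[0])) if any(row[x] for row in pixels)]
    top, bottom, left, right = min(ys), max(ys), min(xs), max(xs)
    trimmed = [row[left:right + 1] for row in pixels[top:bottom + 1]]
    return trimmed, (left, top)

frame = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
trimmed, offset = trim_frame(frame)
print(offset)   # -> (1, 1)
print(trimmed)  # -> [[1, 1], [1, 0]]
```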


'frame' uv rect to a file.

What is a UV?

Also, what do you mean by atlas? Do you mean a map?


UVs are texture coordinates, from (0.0, 0.0) at the top left to (1.0, 1.0) at the bottom right. They're how you address a portion of a texture.

A UV is specified for each vertex of the mesh you want to draw; it's how you map a texture onto a mesh.

In the case of a sprite you have a quad (two triangles), which means four points, so you have four positions and four UV coords.

A texture atlas is texture inception ;D: you take lots of textures and merge them into a single texture (like a tileset or sprite sheet). This is good for performance, because changing textures is expensive; if you manage to fit all the images you need into a single texture, you never have to swap textures in GPU memory, win \o/.
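Putting the two ideas together, here's a small sketch (illustrative names, not any real API) of turning a pixel rect inside an atlas into the four UV coordinates of a sprite quad:

```python
def rect_to_uvs(x, y, w, h, atlas_w, atlas_h):
    """Return the quad's corner UVs, with (0,0) at the atlas's top-left."""
    u0, v0 = x / atlas_w, y / atlas_h
    u1, v1 = (x + w) / atlas_w, (y + h) / atlas_h
    return [(u0, v0), (u1, v0), (u1, v1), (u0, v1)]  # TL, TR, BR, BL

# A 16x16 sprite at pixel (32, 0) in a 64x64 atlas:
print(rect_to_uvs(32, 0, 16, 16, 64, 64))
# -> [(0.5, 0.0), (0.75, 0.0), (0.75, 0.25), (0.5, 0.25)]
```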

The second one is not a (usable) spritesheet. It's probably just frames of the character ripped from the game and stored in a single image.

Note the first row, frames 2 and 5: clearly different widths. That's not a big problem, as you can just store each individual frame's width in a descriptor file.
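Such a descriptor file could look like this (a made-up minimal format, purely for illustration):

```python
import json

# Hypothetical descriptor: frame name -> pixel rect within the sheet.
DESCRIPTOR = json.loads("""
{
  "run_0": {"x": 0,  "y": 0, "w": 16, "h": 32},
  "run_1": {"x": 16, "y": 0, "w": 24, "h": 32}
}
""")

def frame_rect(name):
    """Look up a frame's (x, y, w, h) rect by name."""
    f = DESCRIPTOR[name]
    return (f["x"], f["y"], f["w"], f["h"])

print(frame_rect("run_1"))  # -> (16, 0, 24, 32)
```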

Note the last row, where frames 2-4 overlap each other, so you cannot take an AABB rect of just one frame without catching leftovers from a neighbouring frame. That's what gives it away as not being an actual spritesheet used by a game.

devstropo.blogspot.com - Random stuff about my gamedev hobby


Are UV coordinates necessarily required for 2D game programming, or are they mainly used for 3D? When I programmed my 2D RPG I never needed to worry about UV coordinates. Are UV coordinates specified to save memory?

UVs are necessary for "sampling" textures (fetching texture pixels in shaders) in both Direct3D and OpenGL.

There's no way around it, and it's not about saving memory; it's how you access the texture's pixels (more correctly, texels, since pixels are on the screen).

So if the software/library you used was built on one of those APIs, it was handling this for you.

Note that there's no "2D" concept in those APIs. GPUs are rasterizers that feed vertex data to shaders; the shaders work on that data and output to render targets (textures displayed on the screen).

"3D" or "2D" is entirely the API user's responsibility; it's all about transforming points with matrices. Those points are rasterized to the screen according to the topology specified in the API (lines, triangle lists, etc.).
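As a tiny illustration of that point, "2D" is really just a choice of transform: an orthographic mapping takes pixel coordinates straight to the GPU's clip space (-1..1). A sketch, with made-up names:

```python
def ortho_2d(screen_w, screen_h):
    """Return a function applying a 2D orthographic transform:
    pixel (0,0) -> clip-space top-left (-1, 1);
    pixel (w,h) -> clip-space bottom-right (1, -1)."""
    def transform(x, y):
        return (2 * x / screen_w - 1, 1 - 2 * y / screen_h)
    return transform

to_clip = ortho_2d(640, 480)
print(to_clip(0, 0))      # -> (-1.0, 1.0)
print(to_clip(320, 240))  # -> (0.0, 0.0)
```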

So whether they're necessary for 2D game programming depends on what level* you're working at: if you're using those APIs directly, you certainly need them; if not, don't worry, it's being handled for you.

(*level as in low-level/high-level, not as in beginner/advanced)

--

Now, I don't know if it's even possible to draw things without those APIs these days. Even Windows GDI uses DirectX underneath. So I don't know... I'm curious: how are Flash applications drawn? o.o Anyone know? Where are the vectors translated to pixels?

-edit-

http://help.adobe.com/en_US/as3/mobile/WS08cf58784027fd117735d34612d0a8759d1-8000.html

Crazy, Flash is pretty damn fast considering the rendering madness it has to go through.


Thank you. I've seen sheets like this before (the 2nd one) and have always wondered how games even pull the sprites from there.

Beginner in Game Development?  Read here. And read here.

 
