Fair enough, it might be somewhat far-fetched to account for porting to 16-bit systems, but it can go the other way as well, where a type is suddenly larger than expected. Take things like random number generators, noise maps, or hashing algorithms, where you do multiplications and bit shifts that usually overflow but all of a sudden don't.
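A concrete example of the "suddenly larger" case: a 32-bit FNV-1a hash written with unsigned long on the assumption that it's 32 bits wide relies on the multiplication wrapping mod 2^32. On an LP64 platform, long is 64 bits, nothing wraps, and you quietly get different hashes. A minimal sketch (hash_bad/hash_good are made-up names; the constants are the standard 32-bit FNV-1a parameters):

#include <stdint.h>
#include <stdio.h>

// Written assuming unsigned long is 32 bits: the multiply is expected
// to wrap mod 2^32. With a 64-bit long it never does.
static unsigned long hash_bad(const char *s)
{
    unsigned long h = 2166136261ul;                  // FNV-1a offset basis
    while (*s)
        h = (h ^ (unsigned char)*s++) * 16777619ul;  // FNV-1a prime
    return h;
}

// Same algorithm with an explicit width: wraps mod 2^32 everywhere.
static uint32_t hash_good(const char *s)
{
    uint32_t h = 2166136261u;
    while (*s)
        h = (h ^ (unsigned char)*s++) * 16777619u;
    return h;
}

int main(void)
{
    // With a 64-bit long these disagree as soon as the 32-bit version
    // would have wrapped, i.e. on the very first character here.
    printf("bad:  %lu\ngood: %lu\n",
           hash_bad("key"), (unsigned long)hash_good("key"));
    return 0;
}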
Also, it's easy to make stupid mistakes. This one is back to the sudden 16-bit issue, but you get the idea (yes, really shitty code, I know):
#include <assert.h>   // static_assert (C11)
#include <limits.h>   // INT_MAX

#define MAP_WIDTH 256
#define MAP_HEIGHT 256
#define MAP_COORD(x,y) ((y)*(MAP_WIDTH)+(x))
tile_t g_map[MAP_WIDTH*MAP_HEIGHT];
// somewhere..
// Note: each dimension alone always fits in an int (INT_MAX is at least
// 32767), so this assert is vacuous and never fires.
static_assert(MAP_HEIGHT <= INT_MAX && MAP_WIDTH <= INT_MAX, "Map dimensions too big!");
for (int y = 0; y < MAP_HEIGHT; y++)
{
    for (int x = 0; x < MAP_WIDTH; x++)
    {
        // int math: MAP_COORD(255,255) == 65535, which overflows a
        // 16-bit int (INT_MAX == 32767) but works fine with 32-bit ints
        render_or_whatever(x, y, g_map[MAP_COORD(x,y)]);
    }
}
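To be clear about the fix: the assert has to check the largest index the macro can produce, not the dimensions individually, or you can dodge int altogether by indexing in size_t. A sketch, assuming C11 (MAP_COORD_SAFE is just an illustrative name):

#include <assert.h>   // static_assert (C11)
#include <limits.h>
#include <stddef.h>

// Do the product in long so the check itself can't overflow:
static_assert((long)MAP_WIDTH * MAP_HEIGHT - 1 <= INT_MAX,
              "MAP_COORD() overflows int on this platform!");

// ...or index in size_t, which can represent any valid index into g_map:
#define MAP_COORD_SAFE(x,y) ((size_t)(y) * MAP_WIDTH + (size_t)(x))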
Actually, we've started to see a trend where some game companies ship physical objects that interact with a PC or phone game in some way, e.g. by placing a figure on a specific surface to unlock a character in the game. Those physical objects may have some microcontroller in them, likely 8- or 16-bit, where you could want to share some small subset of your code between that firmware and your PC application/game. It's not super common right now, but with VR and AR we might very well see an increase in these types of things, where the size of int suddenly matters.
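The practical upshot for that shared subset is the same as above: pin every width explicitly so the code behaves identically with a 16-bit int on the MCU and a 32-bit int on the PC. A tiny sketch (figure_checksum and its parameters are made up for illustration):

#include <stdint.h>

// Behaves identically on an 8-bit AVR and on a PC because nothing here
// depends on the platform's int width; the accumulator is always 32 bits.
uint16_t figure_checksum(const uint8_t *data, uint16_t len)
{
    uint32_t sum = 0;
    for (uint16_t i = 0; i < len; i++)
        sum = sum * 31u + data[i];   // done in uint32_t on every target
    return (uint16_t)(sum ^ (sum >> 16));
}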