You would use a 16-bit value so that later math has more detail to work with.

If you adjust the values, that extra precision helps reduce or eliminate banding.
For example, let's say you started with all 256 values in an 8-bit range. Then you ran it through some processing algorithm, perhaps a color curve filter. Since the results get rounded back to integers, you might (as an example) only have 163 distinct values after going through the filter. The filter caused you to lose about 1/3 of the values.

Now let's say you started with all 65536 values in your 16-bit range and ran it through the same filter. When you are done, you get 41727 distinct values. Again, the filter caused you to lose about 1/3 of the values.
When you render the first image, those 163 values are sent to the screen. Since a smooth grayscale would have 256 values rather than 163, there will be many bands in the image. When you render the second image, those 41727 values get reduced down to 256 values that can be displayed, so all 256 values are smoothly shown on screen.
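If you want to see the effect for yourself, here's a rough Python sketch (not any particular image library, and the curve is just an example tone curve I picked) that applies the same curve at 8-bit and 16-bit precision and counts how many distinct values survive the rounding:

```python
def count_surviving_values(levels, curve):
    # Apply the curve to every code value, round back to the nearest
    # integer code, and count how many distinct results remain.
    out = set()
    for v in range(levels):
        x = v / (levels - 1)            # normalize to 0..1
        y = curve(x)
        out.add(round(y * (levels - 1)))
    return len(out)

gamma = lambda x: x ** 2.2              # example curve; any nonlinear filter works

print(count_surviving_values(256, gamma))     # well under 256 survive
print(count_surviving_values(65536, gamma))   # still thousands more than 256
```

The 8-bit run loses a big chunk of its codes, so rendering it directly shows bands. The 16-bit run also loses a chunk, but what's left still maps down to all 256 displayable values.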
Many algorithms cause this kind of issue. Lighting, fragment shading, gamma correction, and many other routines can introduce that type of quantization.
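The losses also compound when you chain operations. As a sketch (again just illustrative Python, using a simple gamma encode/decode round trip as the stand-in for a pipeline), rounding to 8 bits after each step collapses even more values:

```python
def quantize(x, levels=256):
    # Round a 0..1 value to the nearest representable code at this depth.
    return round(x * (levels - 1)) / (levels - 1)

survivors = set()
for v in range(256):
    x = v / 255.0
    x = quantize(x ** (1 / 2.2))   # gamma-encode, round to 8 bits
    x = quantize(x ** 2.2)         # gamma-decode, round to 8 bits again
    survivors.add(round(x * 255))

print(len(survivors))              # noticeably fewer than 256
```

Doing the same intermediate steps at 16 bits and only rounding to 8 bits at the very end avoids most of that accumulated loss, which is exactly why you keep the extra precision until display time.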
As you posted in the Mobile forum, you should know that very few mobile games use PNG as their image format. While the format is great for the web and some other uses, it isn't that great for games since it requires decompression (extra memory and processing power) and can't be used directly by the graphics chip. For your final game assets you should probably be using the ETC or PVRTC formats, depending on whether you are targeting Android or iOS.