Resolution scaling and World / Screen Coords


When it comes to different resolutions, world coordinates, screen coordinates, and everything around these topics, I have no idea what I'm doing for the most part. So if something doesn't make sense, or you have questions, please feel free to ask.

Here is the situation:
Let's say I have a 640 x 360 resolution, and I build an entire 2D game around that resolution. And I mean everything:
graphics, object hit boxes, boundaries, etc.

Now let's say someone else comes along and wants to play it on their system, and for example's sake, they have a system
with a 1280 x 720 display. That's twice the size of my game's native resolution. So my first thought would be to scale
the graphics through my projection matrix, and then... well, after that point I'm lost.

What do I do about logical things such as hit boxes or boundaries?
Since my native game resolution is 640 x 360, one position on the screen is one position in the world [assume my (0,0) is the top-left corner]. If I scale the graphics by 2, I believe things such as hit boxes and boundaries also scale by 2. But do my starting positions for objects get divided by 2?

What is the best/proper way to handle a bigger (or smaller) resolution? How should I be doing this? This is all with respect to mobile devices (Android), if that matters.

You just have to render everything at 1:2.
That doesn't mean you have to change anything about the dimensions of the objects, hit boxes, or positions. You can keep the same logic. You only have to scale input (e.g. mouse positions from screen space to world space) and output (i.e. what you render).
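For example, here's a minimal sketch of that idea (the class and method names are made up for illustration, not from any particular library): the game logic stays in the fixed 640 x 360 logical space, and only input and rendering pass through the scale factors.

// Hypothetical helper: logic lives in a fixed 640 x 360 logical space;
// only input and rendering are converted through the scale factors.
public class ResolutionScaler {
    static final float LOGICAL_W = 640f;
    static final float LOGICAL_H = 360f;
    private final float scaleX;
    private final float scaleY;

    public ResolutionScaler(float deviceW, float deviceH) {
        scaleX = deviceW / LOGICAL_W;  // e.g. 1280 / 640 = 2.0
        scaleY = deviceH / LOGICAL_H;  // e.g.  720 / 360 = 2.0
    }

    // input: device pixels -> logical/world units
    public float toWorldX(float screenX) { return screenX / scaleX; }
    public float toWorldY(float screenY) { return screenY / scaleY; }

    // output: logical/world units -> device pixels
    public float toScreenX(float worldX) { return worldX * scaleX; }
    public float toScreenY(float worldY) { return worldY * scaleY; }
}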


What is the best/proper way to handle a bigger (or smaller) resolution? How should I be doing this?

If you are OK with a blurry scaled image, you can just set the backbuffer resolution of your application to 640 x 360, and it will be fitted to the screen automatically (you don't show your users any resolution settings when the logical resolution is fixed anyway). Alternatively, you can always render your scene to an offscreen texture and copy it to the backbuffer in a final pass. The main advantage of having the backbuffer at the logical resolution of your game is that screenshots and screen recordings will already have the correct size.
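A rough sketch of the offscreen-texture approach (every name here is a hypothetical placeholder; substitute your own library's render-target calls):

// Hypothetical API sketch -- none of these names come from a real library.
RenderTarget offscreen = new RenderTarget(640, 360); // fixed logical resolution

void renderFrame() {
    offscreen.bind();          // all scene drawing happens at 640 x 360
    drawScene();
    offscreen.unbind();

    // final pass: stretch the 640 x 360 texture over the real backbuffer
    drawTexture(offscreen.getTexture(), 0, 0, displayWidth, displayHeight);
}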

I'd also give the option of either a windowed mode where your window size equals the logical resolution of the game, or alternatively a fullscreen mode where you render the game at its real resolution in the center of the screen with nothing around it.

This question seems to pop up every other day. Put a little thought into it: why would you want to hard-code a resolution, or have any hard resolution-related dependency in your design, if your intention is to support multiple resolutions? Without getting into specifics, it makes sense to define all units in a common frame of reference/coordinate system that can be scaled to any resolution. For example, having units defined in a normalized coordinate system (0.0-1.0) would help in terms of consistency. It still does not obviate the need to transform to the actual device coordinates/resolution, but it would make your life easier.

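For illustration, converting a normalized position to device pixels is just a multiply per axis (the numbers here are arbitrary):

// Illustration: a position stored in normalized [0,1] coordinates,
// converted to device pixels only at draw time.
float normX = 0.25f, normY = 0.5f;   // resolution-independent position
int deviceW = 1280, deviceH = 720;   // actual display size
float pixelX = normX * deviceW;      // 320 px on a 1280-wide display
float pixelY = normY * deviceH;      // 360 px on a 720-high display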

I don't want to be tied down to one resolution. I want to be able to support multiple displays; it's just that I don't know how to actually do it.

Using my first example:
My graphics are designed for a 640 x 360 resolution. Someone wants to play on a system with a 1280 x 720 display.

My first thought would be to calculate my scale factor, which in this case is 2, and then write a function to translate screen coordinates to world coordinates.
E.g.:

// usage of the function below
Vector2 worldCoords = ScreenToWorld(100.0f, 200.0f);
Player player = new Player(playerImg, worldCoords);

/* another example */
if (player.position.x >= ActualDeviceScreenWidth)
    player.position = ScreenToWorld(50.0f, player.position.y);

//================================

public Vector2 ScreenToWorld(float screenX, float screenY)
{
    Vector2 vec = new Vector2();
    // TARGET_SCREEN_RES_X / (TARGET_SCREEN_RES_X * scaleFactorX) reduces
    // to 1 / scaleFactorX, so this is just a division by the scale factor
    vec.x = screenX / scaleFactorX; // 100 / 2 = 50
    vec.y = screenY / scaleFactorY; // 200 / 2 = 100
    return vec;
}

I mean, is this how I should do it? This is the first thought off the top of my head.

You don't have to change the size of your objects (their physical properties).

It is OK to simply define, e.g., 1 unit in your game world to be 2 pixels on the screen instead of 1 (your game's native resolution).

If a user wants to play it at 960 x 540, for example, you would define one unit in your game world to be rendered as "1.5 pixels".

But that does not mean that you have to rescale every object in your game-world representation.
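A rough sketch of what that looks like at draw time (Rect and draw(...) here are stand-ins for whatever your own library provides):

// Hypothetical sketch: the hitbox stays in 640 x 360 world units;
// only the draw call applies the render scale.
float renderScale = 2.0f;                  // 1280 / 640 on this display
Rect hitbox = new Rect(100, 50, 32, 32);   // x, y, width, height in world units

// the 32 x 32 sprite comes out as 64 x 64 device pixels
draw(playerImg,
     hitbox.x * renderScale, hitbox.y * renderScale,
     hitbox.w * renderScale, hitbox.h * renderScale);

// collision checks keep using the unscaled 32 x 32 hitbox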

(btw what library are you using?)

You don't have to change the size of your objects (their physical properties).
It is OK to simply define, e.g., 1 unit in your game world to be 2 pixels on the screen instead of 1 (your game's native resolution).


This I don't really understand. I can see this working for graphics, but not for logic. Something is not clicking for me here.

Sticking with the same example:
Let's say I have a player, and at the native resolution (640 x 360) he is 32 x 32 pixels big. His hit box is also 32 x 32 pixels. Then the bigger resolution comes into the picture (1280 x 720, double the native). When the player is rendered, he would be 64 x 64 pixels big.

Now here is the part where I get lost. Wouldn't his hit box also have to be resized to 64 x 64 to match his new render size, in order to keep the game the same across the two different resolutions?

(btw what library are you using?)

Custom lib. But regardless of the lib, I would still have this issue, simply because I don't know how to solve the problem.

Do logic in whatever size you feel like. Then draw it at whatever size you (or the user, depending on what you allow) feel like.

Scale input to your logic, instead of trying to scale your logic based on screen size.

If your logical window is 640 x 360 and the render resolution is 1280 x 720, just treat input at coordinate 500 x 250 as if it were at 250 x 125.

The exact process will depend a bit on what kind of stuff you're using -- some frameworks operate in normalized values from -1 to 1 across the entire screen, others might only give you access to the pixels and the window size. Regardless, you don't really care what size the window is; you care what size your world and logic are.
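Concretely, that conversion is just a multiply per axis (a minimal sketch; displayWidth and displayHeight stand in for whatever your framework reports):

// Minimal sketch: convert a touch/click from device pixels into the
// 640 x 360 logical space before any game logic sees it.
float touchX = 500f, touchY = 250f;                // raw device coordinates
float displayWidth = 1280f, displayHeight = 720f;  // actual screen size
float logicalX = touchX * (640f / displayWidth);   // 500 * 0.5 = 250
float logicalY = touchY * (360f / displayHeight);  // 250 * 0.5 = 125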



Sticking with the same example:
Let's say I have a player, and at the native resolution (640 x 360) he is 32 x 32 pixels big. His hit box is also 32 x 32 pixels. Then the bigger resolution comes into the picture (1280 x 720, double the native). When the player is rendered, he would be 64 x 64 pixels big.

Now here is the part where I get lost. Wouldn't his hit box also have to be resized to 64 x 64 to match his new render size, in order to keep the game the same across the two different resolutions?

Think of the bigger screen as a sort of magnifying glass. Just because everything appears twice as big doesn't mean that it actually is twice as big.

Do logic in whatever size you feel like. Then draw it at whatever size you (or the user, depending on what you allow) feel like.
Scale input to your logic, instead of trying to scale your logic based on screen size.

If your logical window is 640 x 360 and the render resolution is 1280 x 720, just treat input at coordinate 500 x 250 as if it were at 250 x 125.
The exact process will depend a bit on what kind of stuff you're using -- some frameworks operate in normalized values from -1 to 1 across the entire screen, others might only give you access to the pixels and the window size. Regardless, you don't really care what size the window is; you care what size your world and logic are.

Think of the bigger screen as a sort of magnifying glass. Just because everything appears twice as big doesn't mean that it actually is twice as big.


OMG, I finally get it now. It shouldn't matter if the graphics get scaled, because my game would still be running at the original logical resolution. The only thing I would need to do, like you guys said, is scale down my screen coordinates for the input to match my logical resolution. It's like a logical minimap, or like the magnifying glass you mentioned.

I'm sure I'll have more questions later, but regardless, this is a big help! Thanks.

This topic is closed to new replies.
