Display mode advice needed

7 comments, last by Ansou 12 years ago
I am currently working on a 2D top-view game in Java and need some advice regarding display modes.
What is the best and most professional way to handle them? Should I use a fixed display mode like 800x600, or should I have a drop-down menu to choose between all available display modes? And (if the second option) what should happen when the mode is changed? I tried to make the images fit a 1920x1080 screen and put them in a buffered image which I resized to the current display mode, but everything looked so compressed when I ran it on my 1440x900 screen...
Lol 1920x1080 not 1920x1024 :P
Obviously you should query the display driver to know all the resolutions it supports, and gracefully scale your game to the proper dimensions and aspect ratio. Nothing infuriates me more than finding a game that stops at 1280x720 for some reason because the developers couldn't be bothered asking the display driver what the possible resolutions were and just hardcoded some in.
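
In Java that query is one call on the default screen device; a minimal sketch (just printing the modes, though the same list would feed a resolution drop-down):

```java
import java.awt.DisplayMode;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;

public class ListModes {
    public static void main(String[] args) {
        GraphicsDevice device = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice();

        // Every mode the display driver reports as supported
        for (DisplayMode mode : device.getDisplayModes()) {
            System.out.printf("%dx%d @ %d Hz, %d-bit%n",
                    mode.getWidth(), mode.getHeight(),
                    mode.getRefreshRate(), mode.getBitDepth());
        }
    }
}
```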

And when the mode is changed you need to make sure your graphics code is flexible when it comes to resolution changes, so you can just change two variables (width and height) and update your graphics states. Normally game logic should not change, since you're normalizing coordinates, right? (between (-1, -1) and (+1, +1) regardless of the resolution).

EDIT: unless you're writing a really retro type game which would benefit from a low resolution (for the "retro" pixelated feel) - but then again running 800x600 on a 1920x1080 monitor is an eyesore.

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

I start at the current resolution. As I have no AAA game to run, I typically have plenty of hardware power to fill a screen.
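
For reference, reading the current desktop mode in Java is a single call; a short sketch:

```java
import java.awt.DisplayMode;
import java.awt.GraphicsEnvironment;

public class CurrentMode {
    public static void main(String[] args) {
        // The mode the desktop is already running in - a sensible default
        DisplayMode current = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDisplayMode();
        System.out.println(current.getWidth() + "x" + current.getHeight());
    }
}
```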

Previously "Krohm"


And when the mode is changed you need to make sure your graphics code is flexible when it comes to resolution changes, so you can just change two variables (width and height) and update your graphics states. Normally game logic should not change, since you're normalizing coordinates, right? (between (-1, -1) and (+1, +1) regardless of the resolution).

Any idea on how to do this? Cuz as I said, I tried drawing everything to a buffered image which I then resized, but then I get problems with mouse input...
Nah, you pretty much recreate all of your graphics state related to output size (i.e. backbuffer, depth buffer, etc.) to accommodate the new resolution. If you're doing a 2D game you can normally get away with simply resizing your output image to the correct resolution - assuming you've coded the rest of your program to be resolution-agnostic (i.e. working in normalized coordinates) it will work flawlessly.
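
A minimal sketch of that idea in Java (the BackBuffer class and its method names are made up for illustration): recreate the BufferedImage at the new resolution and present it 1:1, instead of stretching an image authored for some other resolution, which is what made everything look compressed:

```java
import java.awt.Graphics;
import java.awt.image.BufferedImage;

public class BackBuffer {
    private BufferedImage buffer;

    // Throw away and recreate the render target whenever the mode changes;
    // if the rest of the code uses normalized coordinates, nothing else cares.
    public void onResolutionChanged(int width, int height) {
        buffer = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    }

    // The game renders each frame into this
    public Graphics beginFrame() {
        return buffer.getGraphics();
    }

    // Present 1:1 - no scaling, so nothing gets squashed
    public void present(Graphics screen) {
        screen.drawImage(buffer, 0, 0, null);
    }
}
```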

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

First, have a default mode saved in a file. You still need to query the graphics device to make sure that mode is available. Then give the player the option to change the mode in an options menu: since you queried the graphics device you can save all of the available modes and let the player select one. Save that choice in your settings file and use it next time.
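
A sketch of that load-and-validate step, assuming a made-up display.properties file with width and height keys:

```java
import java.awt.DisplayMode;
import java.awt.GraphicsDevice;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class DisplaySettings {
    // Load the saved mode, falling back to the current desktop mode if the
    // file is missing or names a resolution this device can't actually do.
    public static DisplayMode load(GraphicsDevice device) {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("display.properties")) {
            props.load(in);
        } catch (IOException e) {
            return device.getDisplayMode();  // no settings file yet
        }
        int w = Integer.parseInt(props.getProperty("width", "-1"));
        int h = Integer.parseInt(props.getProperty("height", "-1"));

        // Only accept the saved mode if the device still reports it
        for (DisplayMode mode : device.getDisplayModes()) {
            if (mode.getWidth() == w && mode.getHeight() == h) {
                return mode;
            }
        }
        return device.getDisplayMode();
    }
}
```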

Brendon Glanzer


assuming you've coded the rest of your program to be resolution-agnostic (i.e. working in normalized coordinates) it will work flawlessly.

But how do I do that? :o
You just divide all the coordinates you use by the current resolution's dimensions. This gives you coordinates between 0 and 1. Then you can multiply by two and subtract one if you want a range between -1 and 1 (more convenient). By doing this you basically abstract the resolution away from your game. Then you simply store the resolution in one place and use it to convert your normalized coordinates back to actual screen pixels, which should only happen at the very end of your rendering pipeline (i.e. display to the screen). In fact 3D APIs such as DirectX or OpenGL do it automatically for you (but you still have to give them the resolution you want at the start, of course).

The idea is that instead of dealing with screen pixel coordinates everywhere and having to change them all the time to accommodate different resolutions, you just provide normalized coordinates, which will naturally scale with the resolution of the screen you're working with. The actual conversions are done "behind the scenes" automatically, which simplifies things a lot.

For instance if you need to draw a square between (200, 200) and (400, 400) on a 800x600 resolution, then you just use the coordinates:

(200 / 800, 200 / 600) to (400 / 800, 400 / 600) => (0.25, 0.3333) to (0.5, 0.6666)

Now assume you run your modified game on a 1600x1200 screen. You don't need to update the coordinates at all! Indeed, they can be derived from the normalized coordinates:

(0.25 * 1600, 0.3333 * 1200) to (0.5 * 1600, 0.6666 * 1200) => (400, 400) to (800, 800)

(this example assumes coordinates between 0 and 1, in general it's often more useful to have them between -1 and 1 but the concept is the same).

This works regardless of the screen's resolution (obviously you'll get stretching if you use a different aspect ratio, but you can account for this by correcting your coordinates for aspect ratio, i.e. basically multiplying the X coordinate by Width/Height, which will cause you to see more of the screen on the sides).
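
In Java this boils down to a tiny class that owns the resolution in one place (the Screen name and its methods here are illustrative, not any particular API). Note that the pixel-to-normalized direction is exactly what fixes the mouse-input problem mentioned earlier: convert the mouse position to normalized coordinates before the game logic ever sees it.

```java
public class Screen {
    private int width, height;  // the only place pixel dimensions live

    public Screen(int width, int height) {
        resize(width, height);
    }

    public void resize(int width, int height) {
        this.width = width;
        this.height = height;
    }

    // Normalized [0, 1] -> pixels; only used at the very end, when drawing
    public int toPixelX(double nx) { return (int) Math.round(nx * width); }
    public int toPixelY(double ny) { return (int) Math.round(ny * height); }

    // Pixels -> normalized [0, 1]; run mouse coordinates through this so a
    // click hits the same game-space spot at every resolution
    public double toNormX(int px) { return px / (double) width; }
    public double toNormY(int py) { return py / (double) height; }
}
```

With the square example above: on an 800x600 screen toPixelX(0.25) gives 200, and after resize(1600, 1200) the same call gives 400, without touching the game logic.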

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

Thank you very much! Will try to implement this now. :P

This topic is closed to new replies.
