

Display mode advice needed


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

8 replies to this topic

#1 LeDerpish   Members   -  Reputation: 109


Posted 22 March 2012 - 09:14 AM

I am currently working on a 2D top-view game in Java and need some advice regarding display modes.
What is the best and most professional way to handle them? Should I use a fixed display mode like 800x600, or should I have a drop-down menu to choose between all available display modes? And (if the second option) what should happen when the mode is changed? I tried to make the images fit a 1920x1080 screen and draw them into a buffered image which I resized to the current display mode, but everything looked compressed when I ran it on my 1440x900 screen...


#2 Bacterius   Crossbones+   -  Reputation: 9262


Posted 22 March 2012 - 03:32 PM

Obviously you should query the display driver to know all the resolutions it supports, and gracefully scale your game to the proper dimensions and aspect ratio. Nothing infuriates me more than finding a game that stops at 1280x720 for some reason because the developers couldn't be bothered asking the display driver what the possible resolutions were and just hardcoded some in.
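Since the thread is about Java, one way to do that query is through AWT's default screen device. A minimal sketch (the class and method names are just illustrative, not from the posts above):

```java
import java.awt.DisplayMode;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;

public class ListModes {
    // Human-readable summary of one mode reported by the driver.
    static String describe(DisplayMode m) {
        return m.getWidth() + "x" + m.getHeight()
                + " @ " + m.getRefreshRate() + " Hz, "
                + m.getBitDepth() + "-bit";
    }

    public static void main(String[] args) {
        GraphicsDevice device = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice();
        // Every resolution/depth/refresh combination the device supports.
        for (DisplayMode mode : device.getDisplayModes()) {
            System.out.println(describe(mode));
        }
    }
}
```

The list you get back is exactly what you would feed into a resolution drop-down, rather than hardcoding a few guesses.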

And when the mode is changed you need to make sure your graphics code is flexible when it comes to resolution changes, so you can just change two variables (width and height) and update your graphics states. Normally game logic should not change, since you're normalizing coordinates, right? (between (-1, -1) and (+1, +1) regardless of the resolution).

EDIT: unless you're writing a really retro type game which would benefit from a low resolution (for the "retro" pixelated feel) - but then again running 800x600 on a 1920x1080 monitor is an eyesore.

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#3 Krohm   Crossbones+   -  Reputation: 3245


Posted 23 March 2012 - 02:49 AM

I start at the current resolution. As I have no AAA game to run, I typically have plenty of hardware power to fill a screen.

#4 LeDerpish   Members   -  Reputation: 109


Posted 06 April 2012 - 11:25 AM

And when the mode is changed you need to make sure your graphics code is flexible when it comes to resolution changes, so you can just change two variables (width and height) and update your graphics states. Normally game logic should not change, since you're normalizing coordinates, right? (between (-1, -1) and (+1, +1) regardless of the resolution).

Any idea how to do this? Because, as I said, I tried drawing everything to a buffered image which I then resized, but then I get problems with mouse input...

#5 Bacterius   Crossbones+   -  Reputation: 9262


Posted 06 April 2012 - 05:39 PM

Any idea how to do this? Because, as I said, I tried drawing everything to a buffered image which I then resized, but then I get problems with mouse input...

Nah, you pretty much recreate all of your graphics state related to output size (i.e. backbuffer, depth buffer, etc.) to accommodate the new resolution. If you're doing a 2D game you can normally get away with simply resizing your output image to the correct resolution - assuming you've coded the rest of your program to be resolution-agnostic (i.e. working in normalized coordinates), it will work flawlessly.
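In Java 2D, one common arrangement is to render into a fixed-size back buffer, stretch it to the current display size on present, and invert the same scale for mouse input - which is also what fixes the mouse problem mentioned above. A rough sketch, with illustrative names and an assumed 1920x1080 virtual resolution:

```java
import java.awt.Graphics2D;
import java.awt.Point;
import java.awt.image.BufferedImage;

public class ScaledView {
    // Fixed "virtual" resolution the game logic is written against.
    static final int VIRTUAL_W = 1920, VIRTUAL_H = 1080;

    final BufferedImage backBuffer =
            new BufferedImage(VIRTUAL_W, VIRTUAL_H, BufferedImage.TYPE_INT_ARGB);

    // Stretch the back buffer to whatever the current display size is.
    void present(Graphics2D screen, int screenW, int screenH) {
        screen.drawImage(backBuffer, 0, 0, screenW, screenH, null);
    }

    // Invert the same scale so clicks land on the right game objects.
    static Point toVirtual(int mouseX, int mouseY, int screenW, int screenH) {
        return new Point(mouseX * VIRTUAL_W / screenW,
                         mouseY * VIRTUAL_H / screenH);
    }
}
```

The key point is that the stretch in `present` and the division in `toVirtual` are the same transform in opposite directions, so rendering and input can never drift apart.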



#6 bglanzer   Members   -  Reputation: 459


Posted 07 April 2012 - 05:27 AM

First, have a default mode saved in a file. You still need to query the graphics device to make sure that mode is available. Then give the player the option to change the mode in an options menu: when you query the graphics device, you can save all of the available modes and let the player select one. Save the selection in your settings file and use it next time.
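The validation step could look something like this sketch, which assumes a `java.util.Properties`-style settings file with an illustrative `displayMode` key (none of these names come from the posts):

```java
import java.awt.DisplayMode;
import java.util.Properties;

public class VideoSettings {
    // Falls back to the default if the saved mode is missing, corrupt,
    // or not among the modes the graphics device actually supports.
    static DisplayMode pick(Properties saved, DisplayMode[] supported,
                            DisplayMode fallback) {
        String s = saved.getProperty("displayMode", "");
        String[] parts = s.split("x");
        if (parts.length == 2) {
            try {
                int w = Integer.parseInt(parts[0]);
                int h = Integer.parseInt(parts[1]);
                for (DisplayMode m : supported) {
                    if (m.getWidth() == w && m.getHeight() == h) {
                        return m;
                    }
                }
            } catch (NumberFormatException ignored) {
                // Corrupt settings entry: fall through to the default.
            }
        }
        return fallback;
    }
}
```

In a real game, `supported` would come from `GraphicsDevice.getDisplayModes()` and the chosen mode would be written back with `Properties.store`.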

Brendon Glanzer


#7 LeDerpish   Members   -  Reputation: 109


Posted 07 April 2012 - 10:35 AM

assuming you've coded the rest of your program to be resolution-agnostic (i.e. working in normalized coordinates) it will work flawlessly.

But how do I do that? :o

#8 Bacterius   Crossbones+   -  Reputation: 9262


Posted 07 April 2012 - 11:57 AM

But how do I do that?

You just divide all the coordinates you use by the current resolution's dimensions. This gives you coordinates between 0 and 1. Then you can multiply by two and subtract one if you want a range between -1 and 1 (more convenient). By doing this you basically abstract the resolution away from your game. Then you simply store the resolution in one place and use it to convert your normalized coordinates back to actual screen pixels, which should only happen at the very end of your rendering pipeline (i.e. display to the screen). In fact, 3D APIs such as DirectX or OpenGL do it automatically for you (but you still have to give them the resolution you want at the start, of course).

The idea is that instead of dealing with screen pixel coordinates everywhere and having to change them all the time to accommodate different resolutions, you just provide normalized coordinates, which naturally scale with the resolution of the screen you're working with. The actual conversions are done "behind the scenes" automatically, which simplifies things a lot.

For instance if you need to draw a square between (200, 200) and (400, 400) on a 800x600 resolution, then you just use the coordinates:

(200 / 800, 200 / 600) to (400 / 800, 400 / 600) => (0.25, 0.3333) to (0.5, 0.6666)

Now assume you run your modified game on a 1600x1200 screen. You don't need to update the coordinates at all! Indeed, they can be derived from the normalized coordinates:

(0.25 * 1600, 0.3333 * 1200) to (0.5 * 1600, 0.6666 * 1200) => (400, 400) to (800, 800)

(this example assumes coordinates between 0 and 1, in general it's often more useful to have them between -1 and 1 but the concept is the same).

This works regardless of the screen's resolution (obviously you'll get stretching if you use a different aspect ratio, but you can account for this by correcting your coordinates for aspect ratio, i.e. basically multiplying the X coordinate by Width/Height, which will cause you to see more of the screen on the sides).
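The arithmetic in the worked example above can be wrapped in two small helpers. This sketch uses the 0-to-1 convention from the example (the class name is just illustrative):

```java
public class NormalizedCoords {
    // Pixel coordinate -> [0, 1] range, independent of resolution.
    static double normalize(int pixel, int size) {
        return (double) pixel / size;
    }

    // [0, 1] range -> pixel coordinate on the current resolution.
    static int toPixel(double normalized, int size) {
        return (int) Math.round(normalized * size);
    }
}
```

For example, `normalize(200, 800)` gives 0.25, and `toPixel(0.25, 1600)` gives 400 - the same round trip as the square example above.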



#9 LeDerpish   Members   -  Reputation: 109


Posted 07 April 2012 - 02:01 PM

Thank you very much! Will try to implement this now. :P



