"Standard" Resolution Performance

19 comments, last by swiftcoder 10 years, 8 months ago

One of my professors has made the claim that using a "standard" resolution (e.g. 1920x1080, 1024x768, etc.) will provide better performance than using a "non-standard" resolution (1500x755, etc.). I've never heard of this before and can't seem to find anything to back his claims. I know a lot of console games render to lower (and "non-standard") resolutions and then upscale for better performance, which is the opposite of what he's stated. I don't know what he really means by "resolution" and he hasn't clarified. (The display resolution? The resolution of any framebuffer that is being rendered to?) He didn't say that this was specifically for any platform, but I was in an XNA-on-Windows class at the time.

I've tried benchmarking it myself (basic 2D sprite rendering via XNA, fullscreen with different backbuffer resolutions) and didn't see any performance penalties/gains that were out of the ordinary. Asking him about it just got the response of "well, it only happens for more advanced graphics algorithms." Has anyone else heard anything like this?
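For reference, my test was roughly along these lines (a trimmed-down sketch of the setup rather than my exact benchmark code; the 5000-sprite loop is just arbitrary filler to give the GPU something to do):

    using System;
    using System.Diagnostics;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class ResolutionBenchmarkGame : Game
    {
        private readonly GraphicsDeviceManager graphics;
        private readonly Stopwatch frameTimer = new Stopwatch();
        private SpriteBatch spriteBatch;
        private Texture2D sprite;

        public ResolutionBenchmarkGame(int width, int height)
        {
            graphics = new GraphicsDeviceManager(this)
            {
                // Swap in 1920x1080, 1024x768, 1500x755, etc. and compare.
                // (In fullscreen, XNA/the driver may substitute the closest supported mode.)
                PreferredBackBufferWidth = width,
                PreferredBackBufferHeight = height,
                IsFullScreen = true,                        // exclusive fullscreen
                SynchronizeWithVerticalRetrace = false      // vsync off so frame times aren't capped
            };
            IsFixedTimeStep = false;                        // run Update/Draw as fast as possible
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            sprite = new Texture2D(GraphicsDevice, 64, 64); // plain white square stands in for art
            var pixels = new Color[64 * 64];
            for (int i = 0; i < pixels.Length; i++) pixels[i] = Color.White;
            sprite.SetData(pixels);
        }

        protected override void Draw(GameTime gameTime)
        {
            // Frame-to-frame time, measured from the start of the previous Draw.
            if (frameTimer.IsRunning)
                Console.WriteLine("Frame: {0:F3} ms", frameTimer.Elapsed.TotalMilliseconds);
            frameTimer.Restart();

            GraphicsDevice.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            for (int i = 0; i < 5000; i++)                  // plenty of overdraw
                spriteBatch.Draw(sprite, new Vector2(i % 640, (i * 7) % 480), Color.White);
            spriteBatch.End();

            base.Draw(gameTime);
        }

        public static void Main()
        {
            using (var game = new ResolutionBenchmarkGame(1500, 755))
                game.Run();
        }
    }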


I don't think there's anything special about the 'standard' resolutions. There's a ton of standard resolutions, given all monitor sizes and ratios.

As far as I know the graphics pipeline doesn't optimize anywhere based on what resolution you're rendering at.

There are two things, sort of related, that I know can sometimes make a difference. One is using textures that are square with power-of-two dimensions, which was only a big deal on older graphics cards (a quick check for that property is sketched after the next paragraph).

The other thing is that running a game in true fullscreen (not the fake borderless-window style, but actual exclusive fullscreen) gives a small performance boost, since the OS applies rendering optimizations when an application owns the display exclusively.
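Going back to the power-of-two point, here's the quick check (a plain helper, just for illustration -- a power of two has exactly one bit set):

    public static class TextureSizeCheck
    {
        // n & (n - 1) clears the lowest set bit; for a power of two that leaves zero.
        public static bool IsPowerOfTwo(int n)
        {
            return n > 0 && (n & (n - 1)) == 0;
        }
    }

    // IsPowerOfTwo(256) == true, IsPowerOfTwo(755) == false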

It won't make a difference to performance unless you're on some platform that can only output video signals in some restricted set of resolutions.

e.g. a hypothetical current gen console might be built to always output at 720p or 1080p, so if you internally use a lower resolution, then you have to explicitly pay the cost of resizing your framebuffer to 720p. However, these systems might also provide hardware support to assist in this operation... Nonetheless, you will burn a fraction of a millisecond performing the scaling.
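In XNA terms (since that's what the thread is about), the render-low-then-upscale idea looks roughly like this -- a sketch only, with DrawScene standing in for whatever actually renders the game:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class UpscalingGame : Game
    {
        private readonly GraphicsDeviceManager graphics;
        private SpriteBatch spriteBatch;
        private RenderTarget2D lowResTarget;

        public UpscalingGame()
        {
            graphics = new GraphicsDeviceManager(this)
            {
                PreferredBackBufferWidth = 1920,   // what actually gets presented/output
                PreferredBackBufferHeight = 1080,
                IsFullScreen = true
            };
        }

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            // Internal rendering resolution is deliberately lower than the output resolution.
            lowResTarget = new RenderTarget2D(GraphicsDevice, 1280, 720,
                                              false, SurfaceFormat.Color, DepthFormat.Depth24);
        }

        protected override void Draw(GameTime gameTime)
        {
            // 1. Render the scene at the cheaper internal resolution.
            GraphicsDevice.SetRenderTarget(lowResTarget);
            GraphicsDevice.Clear(Color.Black);
            DrawScene();                           // placeholder for the real scene rendering

            // 2. Stretch the result over the full backbuffer -- this blit is the
            //    "fraction of a millisecond" scaling cost.
            GraphicsDevice.SetRenderTarget(null);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                              SamplerState.LinearClamp, DepthStencilState.None,
                              RasterizerState.CullNone);
            spriteBatch.Draw(lowResTarget,
                new Rectangle(0, 0,
                              GraphicsDevice.PresentationParameters.BackBufferWidth,
                              GraphicsDevice.PresentationParameters.BackBufferHeight),
                Color.White);
            spriteBatch.End();

            base.Draw(gameTime);
        }

        private void DrawScene()
        {
            // The actual game rendering would go here.
        }
    }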

If we take what he's saying less literally, and replace "performance" with "latency", he might be slightly closer to the truth, at least when displaying on a television.

Some displays are optimized for their native resolution only. If you send them a signal at a different resolution, they might be forced to internally rescale it to match their native resolution. When done by a television, this operation often takes one frame's worth of time -- so a 60Hz signal will have 16.6ms of latency added to it if rescaling is performed inside the TV.

Technically, there is no performance penalty, because this rescaling happens "for free" in parallel, inside the television... but your overall input latency is hugely affected.

Modern HDTVs (especially early or cheap ones) are often offenders in this category -- sometimes playing current-gen console games that are 720p on a 1080p TV will result in an extra 16.6ms of latency, which is ridiculous.

However, there's no way to query this. Some TVs won't add any extra latency regardless of the resolution. Some will add latency if you don't output a "standard" signal. Some will add latency if you don't output a specific signal (e.g. 1080p only). Some TVs will always add a huge amount of latency for no good reason!

The best rule of thumb I can think of here would be to prefer outputting at the monitor's maximum supported resolution (which should be its native resolution).
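In XNA, that rule of thumb might look something like this (assuming a single-monitor desktop, and assuming the largest mode the adapter reports really is the panel's native one -- which, as the next reply shows, isn't guaranteed):

    using System.Linq;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public static class DisplayModeHelper
    {
        public static void UseLargestDisplayMode(GraphicsDeviceManager graphics)
        {
            // Pick the largest mode the default adapter reports.
            DisplayMode largest = GraphicsAdapter.DefaultAdapter.SupportedDisplayModes
                .OrderByDescending(m => m.Width * m.Height)
                .First();

            graphics.PreferredBackBufferWidth = largest.Width;
            graphics.PreferredBackBufferHeight = largest.Height;
            graphics.IsFullScreen = true;
            graphics.ApplyChanges();
        }
    }

(GraphicsAdapter.DefaultAdapter.CurrentDisplayMode -- whatever the desktop is already running at -- is another reasonable choice.)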

The best rule of thumb I can think of here would be to prefer outputting at the monitor's maximum supported resolution (which should be its native resolution).

Even that's not always perfect, though. I've run into quite a few LCD projectors, for instance, that "support" higher resolutions than they can actually display, and actually seem to downsample the input signal. Worse yet, the downsampling was very crude, but just "good" enough that it was obvious that it was supposed to be a "feature."

-~-The Cow of Darkness-~-

Asking him about it just got the response of "well, it only happens for more advanced graphics algorithms."

I'd ask him for an example of such "advanced graphics algorithms".

Frankly it sounds like he doesn't know what he's talking about, or accidentally said something he didn't mean and has low enough self-esteem that he can't bring himself to back out of it.

It's plausible that some algos would work best for resolutions exhibiting certain qualities, like being divisible by a certain number. And for a final render target that is a certain resolution, some lower resolutions will upscale to it more cleanly than others. But a resolution being "standard" has little to do with it.
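For example, if an engine builds half- and quarter-resolution buffers (say for blur or bloom passes), a toy check like this shows why some sizes are friendlier than others -- a hypothetical helper, purely to illustrate the divisibility point:

    public static class ResolutionCheck
    {
        // True if width and height halve evenly for the given number of levels.
        public static bool DividesCleanly(int width, int height, int levels)
        {
            for (int i = 0; i < levels; i++)
            {
                if (width % 2 != 0 || height % 2 != 0)
                    return false;
                width /= 2;
                height /= 2;
            }
            return true;
        }
    }

    // DividesCleanly(1920, 1080, 2) -> true  (960x540, then 480x270)
    // DividesCleanly(1500,  755, 2) -> false (755 is odd, so a half-res buffer needs rounding)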

Asking him about it just got the response of "well, it only happens for more advanced graphics algorithms."

I'd ask him for an example of such "advanced graphics algorithms".

Frankly it sounds like he doesn't know what he's talking about, or accidentally said something he didn't mean and has low enough self-esteem that he can't bring himself to back out of it.

Precisely. This sounds exactly like the kind of theoretical nonsense academics sometimes come out with. If it were the case, the API documentation would be shouting about it, the hardware vendors would be shouting about it, John Carmack would be tweeting about it, and Johan Andersson would be blogging about it.

The fact that the people who define pretty basic stuff, such as how APIs and hardware actually work, and the people who have a track record of actually using this stuff in the field for real programs that real people use, are not doing so is evidence enough. It's nonsense.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I remember a time when using a non-standard resolution required the CPU to scale the image to fit the monitor. But this was many years ago. If I'm not mistaken, most LCDs today have built-in hardware to scale the incoming image to the monitor's native resolution.

There are too many different resolutions available for this to be entirely true these days. Maybe in the past, when there were only a few available resolutions, this would sometimes have been the case.

These days there must be at least a dozen or even two dozen common screen sizes available for desktops and laptops, so it seems to me that the best optimizations would be the ones that are resolution independent.

GPU manufacturers go out of their way to avoid this issue by abstracting away the notion of pixels. This is part of why that pipeline stage is called the fragment processor rather than the pixel processor.

The word 'fragment' is an abstraction of the word 'pixel'. It implies that the pixel information is unknown before the GPU is plugged into a specific machine.

Fragment is used in place of pixel because 'fragment' is resolution and screen size agnostic.

A console may possibly have hardware and driver optimizations for 720p and 1080p, since these are common.

A desktop GPU will certainly make no assumptions about such things.

As far as game programmers go??? Holy merd! This would be like the days before GPU APIs existed, when people had to program for every possible graphics card.

Two-thirds of your software would be conditional switches and branches and code that isn't even being used on the current machine.

There must be at least 100 distinct GPUs by now, with dozens of sub-versions for many of them.

That sounds like a nightmare. I'd rather pluck out all my eyebrow hairs with bolt-cutters than worry about something like this. Then again, some people like to study what comes out of the rear-ends of animals. There is something for everyone.

Consider it pure joy, my brothers and sisters, whenever you face trials of many kinds, because you know that the testing of your faith produces perseverance. Let perseverance finish its work so that you may be mature and complete, not lacking anything.


If I'm not mistaken, most LCDs today have built-in hardware to scale the incoming image to the monitor's native resolution.

Most, but not all by any stretch (pun unintended).

I tend to buy those quad-def IPS monitors from S. Korea, and they explicitly do not include a scaler. Downside: it won't work as a TV. But on the upside you cut down the lag, and desktop GPUs tend to do pretty well at scaling anyway.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Got a lot more responses than I initially thought I would; the whole thing seemed especially weird to me because another part of his reasoning for my tests not showing anything was that "In 2D (only GDI layer) the resolution is less dependent on hardware specs". I really doubted that XNA used GDI for 2D.

Thanks everyone for the info, the part about TV latency sounds like it might have been what he had been referring to and I'll keep that in mind for the future.

