Scaling on available memory

Started by
2 comments, last by Stainless 9 years, 10 months ago

Hi,

Do current games/engines scale on available memory?

On consoles the available memory is fixed, but on PC do games take advantage of variable amounts of memory or simply work with a fixed amount set by the dev and according to the settings chosen by the player?

For example, keeping more models/textures in memory to reduce streaming when the system has available memory.

There's also the problem of balancing performance, because even though there might be space for more particles (for example) the processing power must also be taken into account.


It is my understanding that they do. This is especially the case for terrain-related streaming, such as in Frostbite, and perhaps id's megatexturing, in case anybody still cares about that. Even games that don't stream can be more conservative about throwing away old data when there's memory to spare.
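For illustration, here's a minimal sketch of that last idea: an LRU cache that only evicts assets once it goes over a byte budget, so a machine with more RAM can simply be handed a bigger budget and keep more data resident. The class and names are hypothetical, not taken from any particular engine:

```cpp
#include <cstddef>
#include <cstdint>
#include <list>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical LRU cache for loaded assets. Nothing is evicted until the
// total size exceeds 'budgetBytes', so a machine with more RAM just gets a
// larger budget and keeps more assets around.
class AssetCache {
public:
    explicit AssetCache(size_t budgetBytes) : budget(budgetBytes) {}

    // Insert or replace an asset; evicts least-recently-used entries if needed.
    void put(const std::string& key, std::vector<uint8_t> data) {
        auto it = entries.find(key);
        if (it != entries.end()) {
            usedBytes -= it->second.data.size();
            lru.erase(it->second.lruPos);
            entries.erase(it);
        }
        lru.push_front(key);
        usedBytes += data.size();
        entries.emplace(key, Entry{std::move(data), lru.begin()});

        // Only start throwing data away once we're actually over budget.
        while (usedBytes > budget && !lru.empty()) {
            const std::string& victim = lru.back();
            auto vit = entries.find(victim);
            usedBytes -= vit->second.data.size();
            entries.erase(vit);
            lru.pop_back();
        }
    }

    // Returns the asset if resident, and marks it as recently used.
    const std::vector<uint8_t>* get(const std::string& key) {
        auto it = entries.find(key);
        if (it == entries.end()) return nullptr;
        lru.splice(lru.begin(), lru, it->second.lruPos);  // move to front
        return &it->second.data;
    }

private:
    struct Entry {
        std::vector<uint8_t> data;
        std::list<std::string>::iterator lruPos;
    };
    size_t budget;
    size_t usedBytes = 0;
    std::list<std::string> lru;                     // front = most recently used
    std::unordered_map<std::string, Entry> entries;
};
```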

There's of course the option of higher res assets.

My suggestion is not to worry about those issues. It's a boatload of work and, in the end, not required to ship.

Previously "Krohm"

I've never personally shipped a PC game, so I don't really have first-hand experience with this. For our current title we have a streaming system where the majority of our textures get streamed in based on proximity and visibility, and then get placed into a fixed-size memory pool. With our setup it's easy to know whether a texture will fit in the pool before streaming it in, so we can drop the higher-res mips if they won't fit. However, this isn't really something you want to do in a shipping game, since it means the player might unpredictably get quality loss on key textures depending on how they move through the game and what hardware they have. We really just do it so that we don't crash during development, and so that we can fix the content for cases where we're over budget. Obviously this is a lot easier to do when you're targeting a console with a fixed memory spec.
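As a rough sketch of the "drop the higher-res mips if they won't fit" step (not the actual code from that engine): given a block-compressed texture and the free space left in a fixed-size pool, count how many of the top mips have to be skipped before the rest of the chain fits. Row-pitch alignment and array slices are ignored to keep it short, and all names are made up:

```cpp
#include <algorithm>
#include <cstdint>

// Bytes for one mip of a BC-compressed 2D texture (4x4 blocks,
// 'bytesPerBlock' = 8 for BC1, 16 for BC3/BC7, etc.).
static uint64_t MipSizeBytes(uint32_t width, uint32_t height, uint32_t bytesPerBlock)
{
    uint32_t blocksX = (std::max(width, 1u) + 3) / 4;
    uint32_t blocksY = (std::max(height, 1u) + 3) / 4;
    return uint64_t(blocksX) * blocksY * bytesPerBlock;
}

// Returns how many of the top (largest) mips to skip so that the remaining
// chain fits in 'freePoolBytes'. Returns mipCount if nothing fits.
static uint32_t MipsToDrop(uint32_t width, uint32_t height, uint32_t mipCount,
                           uint32_t bytesPerBlock, uint64_t freePoolBytes)
{
    for (uint32_t drop = 0; drop < mipCount; ++drop)
    {
        uint64_t total = 0;
        uint32_t w = std::max(width >> drop, 1u);
        uint32_t h = std::max(height >> drop, 1u);
        for (uint32_t mip = drop; mip < mipCount; ++mip)
        {
            total += MipSizeBytes(w, h, bytesPerBlock);
            w = std::max(w >> 1, 1u);
            h = std::max(h >> 1, 1u);
        }
        if (total <= freePoolBytes)
            return drop;    // stream in mips [drop, mipCount) only
    }
    return mipCount;        // even the smallest mip alone doesn't fit
}
```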

Based on what I've seen from playing games and reading reviews, I would suspect that most games don't bother trying to adaptively scale back texture quality and instead just rely on the user adjusting settings. In D3D you can over-commit video memory, in which case the OS just starts paging your GPU memory in and out. Your performance drops off a cliff, but you don't crash. Hence you get benchmarks where, at certain settings, a 2GB video card gets significantly better performance than a 1GB card with the same GPU core.
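If you do want to react before hitting that cliff, Windows 10 / DXGI 1.4 exposes IDXGIAdapter3::QueryVideoMemoryInfo, which reports the OS-assigned budget and your current usage. The glue code below is just a sketch (first adapter only, minimal error handling), but the API itself is real:

```cpp
// Query the video memory budget on Windows via DXGI 1.4. Link with dxgi.lib.
#include <windows.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

void PrintVideoMemoryBudget()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))   // first adapter only
        return;

    ComPtr<IDXGIAdapter3> adapter3;
    if (FAILED(adapter.As(&adapter3)))                  // needs Windows 10
        return;

    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (SUCCEEDED(adapter3->QueryVideoMemoryInfo(
            0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info)))
    {
        // 'Budget' is what the OS thinks we can use without being paged out;
        // keeping CurrentUsage below it avoids the performance cliff above.
        printf("Budget: %llu MB, CurrentUsage: %llu MB\n",
               info.Budget / (1024 * 1024),
               info.CurrentUsage / (1024 * 1024));
    }
}
```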

A robust engine should have several memory pools.

A render data pool is very important: this is the memory that interacts with video RAM, and as such it is very prone to fragmentation. All the render data pools I have worked with have defragmentation code attached.

After that it's a good idea to have data pools for things like audio and scripts as well as a system memory pool.

The sizes of all of these should be set up when the app boots. On a console they are usually fixed, but on PC there are advantages to making them dependent on the end user's machine: a larger memory pool needs defragmentation less often, so you burn fewer cycles on it.
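As a rough example of that boot-time sizing (the split percentages and the struct are invented for illustration; only GlobalMemoryStatusEx is a real Windows call):

```cpp
// Pick pool sizes at boot from the detected physical RAM.
#include <windows.h>
#include <algorithm>
#include <cstdint>

struct PoolSizes
{
    uint64_t renderData;   // CPU-side pool backing video-RAM resources
    uint64_t audio;
    uint64_t script;
    uint64_t system;       // general-purpose engine allocations
};

PoolSizes ChoosePoolSizes()
{
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);
    GlobalMemoryStatusEx(&status);

    // Don't assume the game can take everything: cap the total claim,
    // then split the remainder between the pools.
    uint64_t usable = std::min<uint64_t>(status.ullTotalPhys / 2,
                                         8ull * 1024 * 1024 * 1024);

    PoolSizes sizes;
    sizes.renderData = usable / 2;   // bigger pool => defrag runs less often
    sizes.audio      = usable / 8;
    sizes.script     = usable / 16;
    sizes.system     = usable - sizes.renderData - sizes.audio - sizes.script;
    return sizes;
}
```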

I've seen games that will swap out hi-res textures when they detect an issue, but it's not common.

This topic is closed to new replies.
