cozzie

Why all the gpu/ video memory?


Hi,

I've been doing quite a bit of graphics programming over the years, but only now am I wondering why we need as much video memory as cards ship with today.

For example, using D3D/DirectX:
- you create some buffers and surfaces
- you load a few tens or hundreds of MB of textures
- you create lots of small vertex and index buffers, or a few big ones
- you compile/load a few or more shaders (VS/PS), with higher shader models possibly needing more GPU memory

I must be forgetting a crucial element when thinking this way.
Who can help me out? (And maybe others too, unless it's just me.)
Some example numbers for the subjects above (and more) would be welcome.

The commercial line is that you need 2 GB of GDDR5 on your graphics card for higher resolutions. Do you really? Or is it for having lots of massive textures?


Texture memory adds up quickly, and there are lots of temporary buffers floating around. Also keep in mind that GPUs aren't used only for games but also for some GPU-memory-hungry applications such as video processing, high-definition rendering, and specific classes of scientific computing, so there is an incentive to provide enough memory for those applications to work properly.

 

Besides, memory is relatively inexpensive anyway, so it's more cost-effective to have too much of it than not enough.

Think about how much memory you need to contain the pixel buffers for, let's say, two or three big screens (because you want a 180-degree view...).

Now add some float buffers of the same size for fancy post-processing, plus a bunch of extra ones because you use deferred rendering, depth buffers, etc.

And an antialiasing technique that makes the screen buffers twice the width and height.

Some HD textures (yay for bump maps and normal maps and light maps and multitexture maps and reflection maps and blah blah).

A couple million polygons with animation data scattered around.

Oooh, I know: what if we also ran the particle physics and terrain gen on the GPU? What a brilliant idea! :D

Let's just say it adds up.
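The list above can be turned into rough numbers. A minimal sketch in Python, assuming an illustrative deferred setup: four RGBA16F G-buffer targets plus a 32-bit depth buffer. The target count and formats are my assumptions, not any particular engine's layout:

```python
# Rough "it adds up" arithmetic for three 1080p screens side by side.
MiB = 1024 * 1024

width, height = 3 * 1920, 1080        # 180-degree multi-monitor view
pixels = width * height

gbuffer_targets = 4                    # e.g. albedo, normals, material, emissive
bytes_per_target = 8                   # RGBA16F: 4 channels * 2 bytes
depth_bytes = 4                        # 32-bit depth/stencil

per_pixel = gbuffer_targets * bytes_per_target + depth_bytes
base = pixels * per_pixel
print(f"G-buffer + depth: {base / MiB:.1f} MiB")            # ~213.6 MiB

# 2x2 supersampling doubles width and height -> 4x the memory
print(f"with 2x2 supersampling: {4 * base / MiB:.1f} MiB")  # ~854.3 MiB
```

So render targets alone can approach a gigabyte before a single texture is loaded.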


Just do the math.

 

A single 8K uncompressed 8-bit RGBA texture is 256 MB. Heaven forbid you need 16-bit or even 32-bit float for something.

 

You can never have enough graphics memory. Double the amount of memory and artists will want to add an extra MIP level to their textures for higher quality, which then quadruples the amount of memory you need.
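The arithmetic behind both figures is quick to check. A sketch, assuming uncompressed RGBA8 at 4 bytes per texel (`texture_bytes` is just a helper name):

```python
MiB = 1024 * 1024

def texture_bytes(size, bytes_per_texel=4):
    """Uncompressed size of a square RGBA8 texture, base level only."""
    return size * size * bytes_per_texel

print(texture_bytes(8192) // MiB)      # 256 -> the 256 MB quoted above

# A full mip chain only adds about a third on top of the base level...
base = texture_bytes(8192)
mips = sum(texture_bytes(8192 >> i) for i in range(1, 14))
print(f"mip chain overhead: {mips / base:.3f}x")   # ~0.333x

# ...but adding one MORE level on top (16K) quadruples the base cost:
print(texture_bytes(16384) // MiB)     # 1024
```

So "one more MIP level" is not an incremental cost: it dominates the whole chain.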


Pretty much, we didn't need that much before, because texture sizes were around 512x512 for the majority of assets in games, with "high detail" stuff being 1024x1024, and maybe that super-duper one-of-a-kind weapon having a 2K texture in the PC version. Memory usage topped out at 1 GB or 1.5 GB on "ultra" settings.

 

Texture sizes will get bigger with the new console generation; if 1K or 2K becomes the standard size for regular assets, I'd expect memory usage to regularly rise to 2 GB or more.

 

If you have Skyrim, try decompressing the BSAs and check out the texture sizes of weapons, furniture, etc. Then compare them against the official high-resolution texture pack.

A 1920x1080 screen requires:
- 7.91 MB at 32 bits/pixel
- 15.8 MB at 64 bits/pixel (16 bits/channel)

Throw in some AA and you can multiply that by 2 or 4 (32 to 64 MB) for just a single colour buffer.
Throw in at least one depth/stencil buffer, which will take 8, 16, or 32 MB depending on multisample settings.

So, before we've done anything, a simple HDR rendering setup can eat 160 MB on just the 16-bits-per-channel HDR buffer, the z-buffer, and two final colour buffers (for double-buffering support; the more buffers you want, the more RAM you take).

At which point you can throw in more off-screen targets for things like post-processing steps, depending on your render setup, and maybe some buffers to let you do tile-based tone mapping, and suddenly you can be eating a good 400 MB before you've even got to shaders, constant buffers, vertex and index buffers, and textures (including any and all shadow maps required).
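The pre-texture tally can be sketched like this. The MSAA factor and buffer formats here are my assumptions, so the total comes out somewhat lower than the 160 MB quoted, but the shape of the arithmetic is the point:

```python
# Memory tally for a hypothetical 1920x1080 HDR pipeline.
MiB = 1024 * 1024
pixels = 1920 * 1080

def buf(bytes_per_pixel, msaa=1):
    """Size in bytes of one full-screen buffer at the given MSAA factor."""
    return pixels * bytes_per_pixel * msaa

hdr      = buf(8, msaa=4)   # 16-bit-per-channel HDR target, 4x MSAA
depth    = buf(4, msaa=4)   # 32-bit depth/stencil, 4x MSAA
backbufs = 2 * buf(4)       # double-buffered final 32bpp colour

total = hdr + depth + backbufs
print(f"{total / MiB:.1f} MiB before any textures or geometry")
```

Swap in more post-processing targets or higher MSAA and the 400 MB figure above becomes easy to believe.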

Various buffers in there are likely to be double buffered, if not by you then by the driver, which will also increase the memory footprint nicely.

Textures, however, are the killer; higher resolution and more colour depth are going to be expensive even with BC6 and BC7 compression, and that's before you throw in things like light mapping and the various buffers used for it, which can't be compressed.
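For a sense of what block compression buys, a sketch assuming BC7's 16-byte 4x4 blocks against uncompressed RGBA8:

```python
# BC7 stores each 4x4 texel block in 16 bytes: 1 byte per texel,
# versus 4 bytes per texel for uncompressed RGBA8 (a 4:1 ratio).
MiB = 1024 * 1024
size = 4096

uncompressed = size * size * 4                 # RGBA8, 4 bytes/texel
bc7 = (size // 4) * (size // 4) * 16           # 16 bytes per 4x4 block

print(uncompressed // MiB, bc7 // MiB)         # 64 MiB vs 16 MiB
```

A 4:1 saving is substantial, but as the post says, it only delays the problem: quadruple the resolution and you're back where you started.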

Basically stuff takes up a LOT of space.


Double the amount of memory and artists will want to add an extra MIP level to the textures for higher quality, which then quadruples the amount of memory you would need.

This is something I'll never understand: beyond a threshold the texture is so blurry that the difference between more mipmaps and just doing bilinear filtering on the last one is pretty much unnoticeable. Unless you mean adding a new MIP level with higher resolution, but as far as I know that's tied more to the screen resolution than to the video memory (like how the Vita has half the VRAM of the PS3 but also half the usual resolution, so the net result is that you could technically cram in twice as many textures just by dropping the highest MIP level).
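The Vita point checks out arithmetically: dropping the highest MIP level of a full chain leaves roughly a quarter of the memory. A sketch (RGBA8 assumed; `chain_bytes` is a hypothetical helper):

```python
def chain_bytes(top, bytes_per_texel=4):
    """Total bytes for a full mip chain from `top` x `top` down to 1x1."""
    total, size = 0, top
    while size >= 1:
        total += size * size * bytes_per_texel
        size //= 2
    return total

full    = chain_bytes(2048)   # 2048 chain, as on a home console
dropped = chain_bytes(1024)   # same asset with the top level dropped

print(f"{dropped / full:.3f}")   # ~0.250
```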

 

Of course, with 4K screens around the corner...


Just do the math.

 

A single 8K uncompressed 8-bit RGBA texture is 256 MB. Heaven forbid you need 16-bit or even 32-bit float for something.

 

You could never have enough graphics memory. Double the amount of memory and artists will want to add an extra MIP level to the textures for higher quality, which then quadruples the amount of memory you would need.

 

Thought "that can't be right"... then, a few seconds of math later: "yeah, crap, wow."

 

If you use virtualized textures, I'm not sure what you'd use 8 GB of RAM (well, less the OS's share, but whatever) for. But there's probably something that could fill it up: hundreds of cached shadow maps, or voxelized scene representations, or something.


Unless you mean adding a new MIP level with higher resolution

 

That is indeed what I meant. You don't need a 4K screen to benefit from 4K textures over 2K textures. When was the last time you saw a game in which texel size never exceeded pixel size on a 1080p screen? Not even Rage with its megatextures manages that. Sure, things look good at a distance or at 60 mph, but get close up and inspect things and you'll see the large texels and compression artifacts.
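One way to make the texel-vs-pixel argument concrete: the texture resolution you need is set by how much of the texture is visible on screen, not by the screen alone. A toy model (the function and its assumptions are mine, not anything from Rage):

```python
import math

screen_width = 1920   # 1080p horizontal resolution

def min_texture_size(fraction_visible):
    """Smallest power-of-two texture width that keeps at least one texel
    per pixel when `fraction_visible` of the texture spans the screen."""
    needed = screen_width / fraction_visible
    return 1 << math.ceil(math.log2(needed))

print(min_texture_size(1.0))   # whole texture fills the screen -> 2048
print(min_texture_size(0.5))   # walk closer, half visible -> 4096
```

Zoom in on any surface and the required resolution climbs past the screen's, which is exactly why close-up inspection exposes large texels.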


But still, megatextures (or sparse virtual textures, or whatever; there are a few different names for them, AFAIR, but it's basically the same thing) are better than a solution without them. The problem might be with their creation.

 

Still, for sub-texel detail (if I may call it that), there are no real solutions these days (or none I've heard of). Maybe some runtime-generated fractal textures might help here - fully procedurally generated, of course - which wouldn't actually consume memory and would thus be viable... although each surface would need a different one in order to model a realistic surface material.
