

Why all the GPU/video memory?



#1 cozzie   Members   -  Reputation: 1758


Posted 26 May 2013 - 02:56 PM

Hi,

I've been doing quite a bit of graphics programming over the years, but only now am I wondering why we need as much video memory as is available today.

For example, using D3D/DirectX:
- you create some buffers and surfaces
- you load a few tens or hundreds of MB of textures
- you create lots of small, or a few big, vertex buffers and index buffers
- you compile/load a few (or maybe more) shaders (VS/PS)
(with higher shader models possibly needing more GPU memory?)

I must be forgetting a crucial element when thinking this way.
Who can help me out? (and maybe others, unless it's just me)
Maybe with some example numbers for the subjects above (and more).

The marketing claim is that you need 2 GB of GDDR5 on your graphics card for higher resolutions. Do you really? Or is it for having lots of massive textures?


#2 Bacterius   Crossbones+   -  Reputation: 9262


Posted 26 May 2013 - 03:20 PM

Texture memory quickly adds up, and there are lots of temporary buffers floating around. Also keep in mind that GPUs aren't used only for games but also for memory-hungry applications such as video processing, high-definition rendering and certain classes of scientific computing, so there is an incentive to provide enough memory for those applications to work properly.

 

Besides, memory is relatively inexpensive anyway, so it's more cost-effective to have too much of it than not enough.


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#3 Waterlimon   Crossbones+   -  Reputation: 2634


Posted 26 May 2013 - 03:21 PM

Think about how much memory you need for the pixel buffers of, let's say, 2 or 3 big screens (because you want a 180-degree view...).

Now add some float buffers of the same size for fancy post-processing, a bunch of extra ones because you use deferred rendering, depth buffers, etc.

And some antialiasing technique that makes the screen-sized buffers twice the width and height.

Some HD textures (yay for bump map and normal map and light map and multitexture map and reflection map and blah blah).

A couple million polygons with random animation data scattered around.

Oooh, I know, what if we also ran the particle physics and terrain gen on the GPU? What a brilliant idea! :DDD

Let's just say it adds up.

o3o
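
To put rough numbers on that, here's a quick C++ back-of-envelope sketch; the resolutions, buffer counts and formats are just assumptions for illustration, not figures from any particular engine:

// Rough back-of-envelope sketch of the render-target memory described above.
// All resolutions, buffer counts and formats are assumptions for illustration.
#include <cstdio>

int main()
{
    const double MiB = 1024.0 * 1024.0;

    // Three 1920x1080 monitors side by side for a wide field of view.
    const long long width  = 3 * 1920;
    const long long height = 1080;
    const long long pixels = width * height;

    // Deferred G-buffer: assume 4 render targets at 32 bits/pixel plus a 32-bit depth buffer.
    double gbuffer = pixels * (4 * 4 + 4) / MiB;

    // HDR post-processing chain: assume 2 full-size RGBA16F targets (8 bytes/pixel).
    double postfx  = pixels * 2 * 8 / MiB;

    // 4x MSAA on the main colour + depth targets (4 + 4 bytes/pixel, 4 samples each).
    double msaa    = pixels * 4 * (4 + 4) / MiB;

    printf("G-buffer:        %6.1f MiB\n", gbuffer);
    printf("Post-processing: %6.1f MiB\n", postfx);
    printf("4x MSAA targets: %6.1f MiB\n", msaa);
    printf("Total:           %6.1f MiB\n", gbuffer + postfx + msaa);
    return 0;
}

Even with these fairly conservative assumptions, that lands around 400 MiB of render targets before a single texture or mesh is loaded.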


#4 Chris_F   Members   -  Reputation: 2459


Posted 26 May 2013 - 03:56 PM

Just do the math.

 

A single 8K uncompressed 8-bit RGBA texture is 256 MB. Heaven forbid you need 16-bit or even 32-bit float for something.

 

You could never have enough graphics memory. Double the amount of memory and artists will want to add an extra MIP level to the textures for higher quality, which then quadruples the amount of memory you would need.
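
A small C++ sketch to check those numbers, with example sizes and formats (a full mip chain adds roughly a third on top of the base level):

// Quick sanity check of the numbers above: memory for an uncompressed texture,
// with and without a full mip chain. Sizes and formats are just examples.
#include <cstdio>

long long textureBytes(long long size, int bytesPerTexel, bool withMips)
{
    long long total = 0;
    do {
        total += size * size * bytesPerTexel;
        size /= 2;
    } while (withMips && size >= 1);
    return total;
}

int main()
{
    const double MiB = 1024.0 * 1024.0;
    // 8K RGBA8 (4 bytes/texel): 256 MiB for the top level alone,
    // ~341 MiB with the full mip chain.
    printf("8K RGBA8, no mips:   %.1f MiB\n", textureBytes(8192, 4, false) / MiB);
    printf("8K RGBA8, full mips: %.1f MiB\n", textureBytes(8192, 4, true)  / MiB);
    // The same texture in RGBA16F (8 bytes/texel) simply doubles everything.
    printf("8K RGBA16F, mips:    %.1f MiB\n", textureBytes(8192, 8, true)  / MiB);
    return 0;
}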



#5 TheChubu   Crossbones+   -  Reputation: 4750


Posted 26 May 2013 - 04:06 PM

Pretty much, we didn't need that much because texture sizes were around 512x512 for the majority of assets in games, with "high detail" stuff being 1024x1024, and maaaybe that super-duper one-of-a-kind weapon having a 2K texture in the PC version. Memory usage topped out at 1 GB or 1.5 GB on the "ultra" settings.

Texture sizes will get bigger with the new console generation; if 1K or 2K becomes the standard size for regular assets, I'd expect memory usage to rise to 2 GB or more regularly.

If you have Skyrim, try decompressing the BSAs and check out the texture sizes of weapons, furniture, etc. Then compare against the official high-resolution texture pack.


"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#6 phantom   Moderators   -  Reputation: 7555


Posted 26 May 2013 - 04:21 PM

A 1920*1080 screen requires:
- 7.91 MB @ 32 bits/pixel
- 15.8 MB @ 64 bits/pixel (16 bits/channel)

Throw in some AA and you can increase that by 2 or 4 times (32 to 64 MB) for just a single colour buffer.
Throw in at least one depth-stencil buffer, which will take 8, 16 or 32 MB depending on multi-sample settings.

So, before we've done anything, a simple HDR rendering setup can eat 160 MB on just the 16-bit-per-channel HDR buffer, z-buffer and two final colour buffers (for double-buffering support; the more buffers you want, the more RAM you take).

At that point you can throw in more off-screen targets for things like post-processing steps, depending on your render setup, and maybe some buffers to let you do tile-based tone mapping, and suddenly you can be eating a good 400 MB before you've even got to shaders, constant buffers, vertex and index buffers and textures (including any and all shadow maps required).

Various buffers in there are likely to be double-buffered, if not by you then by the driver, which will also increase the memory footprint nicely.

Textures, however, are the killer: higher resolution and more colour depth are expensive even with BC6 and BC7 compression, and then you throw in things like light mapping and the various buffers used for it, which can't be compressed.

Basically stuff takes up a LOT of space.
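
A quick C++ sketch reproducing the per-buffer figures above, plus a rough total for one possible HDR setup; the exact buffer list (4x MSAA scene target, MSAA depth, resolve target, two swap-chain buffers) is an illustrative assumption, not a fixed recipe:

// Verifies the 1080p figures above (7.91 MiB at 32bpp, 15.8 MiB at 64bpp)
// and totals one assumed HDR render-target setup.
#include <cstdio>

int main()
{
    const double MiB = 1024.0 * 1024.0;
    const long long pixels = 1920LL * 1080LL;

    printf("Colour @ 32bpp: %.2f MiB\n", pixels * 4 / MiB);   // ~7.91 MiB
    printf("Colour @ 64bpp: %.2f MiB\n", pixels * 8 / MiB);   // ~15.8 MiB

    // Assumed setup: one RGBA16F scene target with 4x MSAA, a 4x MSAA 32-bit
    // depth/stencil buffer, the resolved HDR target, and two LDR swap-chain buffers.
    double hdrMsaa   = pixels * 8 * 4 / MiB;  // ~63.3 MiB
    double depthMsaa = pixels * 4 * 4 / MiB;  // ~31.6 MiB
    double resolved  = pixels * 8     / MiB;  // ~15.8 MiB
    double backBufs  = pixels * 4 * 2 / MiB;  // ~15.8 MiB

    // ~127 MiB for this particular combination; add more intermediate
    // targets (post-processing, tone mapping, shadow maps) and it climbs fast.
    printf("HDR setup total: %.1f MiB\n", hdrMsaa + depthMsaa + resolved + backBufs);
    return 0;
}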

#7 Sik_the_hedgehog   Crossbones+   -  Reputation: 1833


Posted 26 May 2013 - 05:29 PM

Double the amount of memory and artists will want to add an extra MIP level to the textures for higher quality, which then quadruples the amount of memory you would need.

This is something I'll never understand: beyond a certain threshold the texture is so blurry that the difference between more mipmaps and just doing bilinear filtering on the last one is pretty much unnoticeable. Unless you mean adding a new MIP level with higher resolution, but as far as I know that's more bound to the screen resolution than to the video memory (like how the Vita has half the VRAM of the PS3 but also half the usual resolution, so the net result is that you could technically cram in twice as many textures just by dropping the highest MIP level).

 

Of course, with 4K screens around the corner...


Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.

#8 Frenetic Pony   Members   -  Reputation: 1393


Posted 26 May 2013 - 05:50 PM

Just do the math.

 

A single 8K uncompressed 8-bit RGBA texture is 256 MB. Heaven forbid you need 16-bit or even 32-bit float for something.

 

You could never have enough graphics memory. Double the amount of memory and artists will want to add an extra MIP level to the textures for higher quality, which then quadruples the amount of memory you would need.

 

Thoughts "that can't be right" a few seconds of math later... "yeah, crap, wow."

 

If you use virtualized textures, I'm not sure what you'll be using 8 GB (well, less whatever the OS takes, but whatever) of RAM for. But there's probably something that could fill it up. Hundreds of cached shadow maps, or voxelized scene representations, or something.



#9 Chris_F   Members   -  Reputation: 2459


Posted 26 May 2013 - 05:56 PM

Unless you mean adding a new MIP level with higher resolution

 

That is indeed what I meant. You don't need a 4K screen to get benefit from having 4K textures over 2K textures. When was the last time you saw a game in which texel size never exceeded pixel size on a 1080p screen? Not even Rage with its megatextures can manage to do that. Sure, things look good at a distance or at 60mph, but if you get close up and inspect things you'll see the large texels and compression artifacts.
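
A tiny C++ sketch of that texel-vs-pixel argument, with made-up screen coverage values: the closer you get to a surface, the more texels it needs for one texel to stay at or below one pixel, even on a 1080p screen.

// How wide a texture must be so that texel size <= pixel size at a given zoom.
// Screen width and coverage fractions are made-up examples.
#include <cstdio>

int main()
{
    const int screenWidth = 1920;            // 1080p
    // Fraction of the object's 0..1 UV range visible across the screen
    // (smaller = more zoomed in).
    const double visibleUv[] = { 1.0, 0.5, 0.25 };

    for (double uv : visibleUv) {
        int texels = static_cast<int>(screenWidth / uv);
        printf("%.0f%% of the texture on screen -> needs >= %d texels wide\n",
               uv * 100.0, texels);
    }
    return 0;
}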



#10 Vilem Otte   Crossbones+   -  Reputation: 1560


Posted 26 May 2013 - 08:01 PM

But still, megatextures (or sparse virtual textures, or ... there are a few different names for them AFAIR, but it's basically the same thing) are better than a solution without them. The problem might be with their creation.

Still, for sub-texel detail (if I may call it that), there are no real solutions these days (or none I've heard of). Maybe runtime-generated fractal textures might help here - fully procedurally generated, of course - which wouldn't actually consume memory and would thus be viable... although each surface would need a different one in order to model a realistic surface material.


My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com


#11 Frenetic Pony   Members   -  Reputation: 1393


Posted 26 May 2013 - 11:52 PM

Unless you mean adding a new MIP level with higher resolution

 

That is indeed what I meant. You don't need a 4K screen to get benefit from having 4K textures over 2K textures. When was the last time you saw a game in which texel size never exceeded pixel size on a 1080p screen? Not even Rage with its megatextures can manage to do that. Sure, things look good at a distance or at 60mph, but if you get close up and inspect things you'll see the large texels and compression artifacts.

 

Err, the low-res textures and artifacts are due entirely to disc size limits. There aren't any particular technical limitations to virtualized textures; you can have one texel per screen pixel (based on the near clipping plane) if you want. There's also no real problem with the creation of virtualized textures. In fact it's better all around than the "standard" way of doing things, as there are no memory limits on the size or variety of textures. Nor are they required to have all pixels stored uniquely; you can repeat textures all you want. In fact Trials Evolution already does so, and it's a fairly small downloadable game. You can also eliminate unwrapping UV maps for your artists if you want to do so (GDC... 11 or 12, a Lionhead paper).

 

It is pretty much the closest thing you'll find to a silver bullet for texture memory problems. Even with 8 GB of RAM there's still a lot of benefit to using such a system. But as I said, I can already imagine other ways of filling up a ton of RAM quickly for a game.
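
A minimal C++ sketch of the arithmetic behind that (page size, pool size and virtual extent are assumptions for illustration): with virtual texturing, resident GPU memory is bounded by the physical page pool rather than by the total amount of authored texture content.

// Resident memory with virtual texturing vs. total virtual texture content.
// All dimensions below are illustrative assumptions.
#include <cstdio>

int main()
{
    const double MiB = 1024.0 * 1024.0;

    // Assume 128x128 texel pages stored as uncompressed RGBA8 (4 bytes/texel).
    const long long pageBytes = 128LL * 128LL * 4;

    // Assume a 4096x4096 physical page cache => 32x32 = 1024 resident pages.
    const long long residentPages = (4096 / 128) * (4096 / 128);

    // Virtual address space: say 128K x 128K texels of unique content.
    const long long virtualBytes = 131072LL * 131072LL * 4;

    printf("Virtual texture content: %.1f GiB\n", virtualBytes / (MiB * 1024.0));
    printf("Resident page pool:      %.1f MiB\n", residentPages * pageBytes / MiB);
    return 0;
}

Under these assumptions, tens of gigabytes of unique texture content stay on disc while only a fixed pool of a few tens of MiB needs to be resident at any time.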


Edited by Frenetic Pony, 26 May 2013 - 11:54 PM.


#12 cozzie   Members   -  Reputation: 1758


Posted 27 May 2013 - 02:50 PM

Wow, I didn't expect that many reactions. Thanks :)

What I conclude is that I didn't miss any areas where the memory is used; I just didn't realize the impact of, for example, textures. I'll definitely make some memory manager in my engine (even if it's only to get a good view of things).
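
For what it's worth, a minimal C++ sketch of that kind of bookkeeping - just tallying bytes per resource category next to your create/release calls so you can see where the memory goes; the categories and interface here are made up for illustration:

// Minimal per-category GPU memory tally; call onAlloc/onFree next to the
// corresponding resource creation/release calls in your engine.
#include <cstdio>
#include <map>
#include <string>

class GpuMemoryTracker
{
public:
    void onAlloc(const std::string& category, long long bytes) { usage_[category] += bytes; }
    void onFree (const std::string& category, long long bytes) { usage_[category] -= bytes; }

    void report() const
    {
        const double MiB = 1024.0 * 1024.0;
        double total = 0;
        for (const auto& entry : usage_) {
            printf("%-16s %8.2f MiB\n", entry.first.c_str(), entry.second / MiB);
            total += entry.second / MiB;
        }
        printf("%-16s %8.2f MiB\n", "total", total);
    }

private:
    std::map<std::string, long long> usage_;
};

int main()
{
    GpuMemoryTracker tracker;
    // Example numbers only.
    tracker.onAlloc("textures",       512LL * 1024 * 1024);
    tracker.onAlloc("render targets",  96LL * 1024 * 1024);
    tracker.onAlloc("vertex buffers",  48LL * 1024 * 1024);
    tracker.report();
    return 0;
}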

#13 radioteeth   Prime Members   -  Reputation: 1135


Posted 27 May 2013 - 10:44 PM

Over the next few years, I'd say 3D-texturing applications.








