

CPU, GPU, or maybe not constrained at all?



This topic is locked
7 replies to this topic

#1 fir   Members   -  Reputation: -460

0 Likes

Posted 16 July 2014 - 08:51 AM

Are today's games (by default I'm thinking of desktop x86 Windows games) more CPU- or GPU-constrained? (By constrained I mean the situation where the game's quality suffers in some way because of the platform's potential shortcomings.)

Is it the GPU, the CPU, or maybe something else, or maybe nothing?

 

Sorry, duplicated by mistake; it could be deleted.


Edited by fir, 16 July 2014 - 09:04 AM.



#2 Ravyne   GDNet+   -  Reputation: 10212

4 Likes

Posted 16 July 2014 - 09:22 AM

It depends entirely; there's no sure answer, but in general I'd say that getting acceptable performance out of the GPU takes more effort. The primary reason has more to do with scalability than with computational constraints. For just about any modern CPU you can buy today, the capability difference between the top tier and the lower end of 'mainstream' (i.e. not 'budget') isn't going to be more than a factor of 3 or so, and that's only if your engine can effectively use more than 2-4 'heavy' threads of execution. Most games don't, and then the factor decreases to something under 2, perhaps a bit less. A top-end part is 4 fast cores with hyper-threading; the lower-end part is 2 nearly-as-fast cores with hyper-threading -- you give up 2 cores but only 25-33% of the clock speed, and if you weren't using the extra cores heavily, they weren't giving you much advantage on the higher-end CPU anyway. It's been this way for a while; CPUs have only made incremental (~10% per generation) improvements for the past 5 years or so.
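
A quick back-of-the-envelope sketch of that spread in C++; the core counts and clocks below are made-up illustrative assumptions, not benchmarks of real parts:

    #include <cstdio>

    struct Cpu {
        const char* name;
        int cores;        // physical cores
        double clockGHz;  // sustained clock speed (assumed)
    };

    int main() {
        Cpu top = {"hypothetical top-tier 4C/8T", 4, 3.5};
        Cpu low = {"hypothetical mainstream 2C/4T", 2, 2.6};

        // If the engine scales across all cores, capability ~ cores * clock.
        double threaded = (top.cores * top.clockGHz) / (low.cores * low.clockGHz);

        // If the game is limited by 1-2 heavy threads, only clock matters.
        double serial = top.clockGHz / low.clockGHz;

        printf("fully threaded engine: %.2fx gap\n", threaded);  // ~2.7x
        printf("1-2 heavy threads:     %.2fx gap\n", serial);    // ~1.3x
        return 0;
    }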

 

GPUs tend to span a broader range between the highest- and lowest-end parts -- typically about 6-8x within a single generation, while the generation-to-generation gain for similarly-priced parts is around 20%. It's not uncommon to find a brand-new CPU paired with a GPU inherited from a couple of generations ago, either, so ideally you're probably supporting GPUs as far down as 1/16th the power of an enthusiast-level GPU at time of launch. The only break you catch is that it's fairly trivial to scale along with graphics performance -- heck, just halving the resolution in each dimension means you can run at the same quality using only 1/4th of the GPU resources. Typically, games will target right about the middle of available GPU power, because it's just as easy to scale up for users with extra-powerful GPUs -- just render at higher resolutions and throw in some more eye candy. You can't easily do more stuff on the CPU side of things; there's not much extra you can do without affecting gameplay.
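
The resolution-scaling point in numbers -- a tiny sketch that assumes per-pixel shading dominates the frame cost (often but not always true):

    #include <cstdio>

    int main() {
        long long w = 1920, h = 1080;
        long long fullPixels = w * h;              // 2,073,600
        long long halfPixels = (w / 2) * (h / 2);  // 518,400

        // Halving the resolution in each dimension quarters the shaded pixels.
        printf("pixels at full res: %lld\n", fullPixels);
        printf("pixels at half res: %lld\n", halfPixels);
        printf("shading cost ratio: %.2f\n",
               (double)halfPixels / (double)fullPixels);  // 0.25
        return 0;
    }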

 

GPUs also require their own special kind of care to get the most performance out of them, and doing things the wrong way can hose your performance very quickly. The patterns for getting this performance are well understood; they just take time and care to implement, and when combined with a variety of shaders tuned to each GPU generation and performance/image-quality level, the work is combinatorial (Effort = patterns * GPU generations * perf-IQ levels; the product of the factors, not their sum).
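
That multiplication is easy to underestimate; a toy example where all three counts are made up purely for illustration:

    #include <cstdio>

    int main() {
        int patterns       = 5;  // distinct rendering techniques to implement
        int gpuGenerations = 4;  // hardware generations to tune for
        int perfIqLevels   = 3;  // low / medium / high quality tiers

        // If effort merely added up, you'd maintain 12 tuned variants;
        // because it multiplies, you maintain 60.
        printf("sum:     %d variants\n", patterns + gpuGenerations + perfIqLevels);
        printf("product: %d variants\n", patterns * gpuGenerations * perfIqLevels);
        return 0;
    }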



#3 fir   Members   -  Reputation: -460

0 Likes

Posted 16 July 2014 - 09:54 AM

(fir quotes Ravyne's reply above in full.)

Alright, interesting answer. (I may say that those estimates of the span of CPU processing power are consistent with my own, as I've done some benchmarks.) Though it's an answer to a somewhat different question: "what is harder to optimize (or maintain), and why?" I was more curious which kind of hardware speedup, CPU or GPU, would be more welcome in games and would have a bigger influence on the resulting quality (maybe that's hard to answer, but I wonder). Or maybe today's game quality depends mainly on content, not on processing power?

 

Processing power is important, as it can certainly make some work easier (no need to maintain complex optimization structures), but is it still that important, or not so much? Hard to answer by myself.



#4 Ravyne   GDNet+   -  Reputation: 10212

2 Likes

Posted 16 July 2014 - 10:07 AM


or maybe today's game quality depends mainly on content, not on processing power?

 

This.

 

We have an abundance of both CPU and GPU power today, and while we will always be happy for more to come, we really have reached a point where either is almost always more than enough. The world's most efficient rendering engine will still look like crap if you feed it shitty content to render.

 

In terms of where content intersects with the technical, there are really two big areas of interest today that I know of -- unified, physically-correct material BRDFs, and real-time, realistic lighting. We used to fake all of this: we'd pre-bake lighting, or we'd create materials independently of one another and then tweak them to look "right" under local lighting conditions. We have enough GPU power now that there's less and less faking of these things.
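
For a sense of what a "unified, physically-correct material BRDF" means in code, here is a minimal textbook-style sketch (Lambert diffuse plus GGX/Cook-Torrance specular with Schlick Fresnel). It illustrates the idea of one BRDF driven by a few material parameters instead of per-asset hacks; it is not any particular engine's implementation:

    #include <cmath>
    #include <algorithm>

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  add(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3  scale(Vec3 v, float s) { return {v.x*s, v.y*s, v.z*s}; }
    static Vec3  normalize(Vec3 v) { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

    // n = surface normal, v = view dir, l = light dir (all unit length).
    // roughness in (0,1]; f0 = reflectance at normal incidence.
    Vec3 brdf(Vec3 n, Vec3 v, Vec3 l, Vec3 albedo, float roughness, float f0) {
        const float PI = 3.14159265f;
        Vec3 h = normalize(add(v, l));                 // half vector
        float nl = std::max(dot(n, l), 0.0f);
        float nv = std::max(dot(n, v), 1e-4f);
        float nh = std::max(dot(n, h), 0.0f);
        float vh = std::max(dot(v, h), 0.0f);

        // GGX normal distribution term
        float a  = roughness * roughness;
        float a2 = a * a;
        float d  = a2 / (PI * std::pow(nh * nh * (a2 - 1.0f) + 1.0f, 2.0f));

        // Schlick Fresnel approximation
        float f = f0 + (1.0f - f0) * std::pow(1.0f - vh, 5.0f);

        // Schlick-Smith geometry (masking/shadowing) term
        float k = a / 2.0f;
        float g = (nl / (nl * (1.0f - k) + k)) * (nv / (nv * (1.0f - k) + k));

        // Cook-Torrance specular plus Lambertian diffuse
        float spec = (d * f * g) / std::max(4.0f * nl * nv, 1e-4f);
        return add(scale(albedo, 1.0f / PI), {spec, spec, spec});
    }

The same function, evaluated under any lighting environment, gives a consistent response; that consistency is exactly what the old tweak-it-per-scene workflow couldn't provide.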



#5 fir   Members   -  Reputation: -460

-3 Likes

Posted 16 July 2014 - 10:50 AM

 


(fir quotes Ravyne's reply (#4) above in full.)

 

 

Still, speaking of games, it seems to me that in-game physics is weak in present-day games (though I cannot be 100% sure how it really looks, as I don't play too many games).

By physics I mean the real physics of destruction (when you throw a bomb and a building falls down), not the extremely poor physics of throwing barrels or boxes around.

But I don't know the reason for the lack of such real physics (at least when speaking of damage; 'physics' is a misleading term anyway, and the two main areas of this 'physics' would probably be 'destruction' and 'biodynamics').

Is it a lack of CPU power, a lack of GPU power, or just a lack of algorithmic solutions? Probably the last one.



#6 ApochPiQ   Moderators   -  Reputation: 17388

7 Likes

Posted 16 July 2014 - 11:55 AM

Actually, there are perfectly good solutions for limited destructive environments. They're used in the film effects industry all the time. There's even some middleware support for doing destruction in realtime for games.

The problem is a lot more subtle than just "not enough CPU" or "not enough GPU." In some cases, it may be possible to compute destructed environments in realtime, but visualizing them is too expensive. In other cases, doing realistic destruction is incredibly CPU expensive but trivial to visualize.

Where it gets tricky is when you want to support both cases in the same game/simulation. Film effects are rendered in the minutes-per-frame ballpark for this reason.

It's practical enough to get limited effects (see The Force Unleashed, Red Faction, etc.), but getting truly generalized effects is going to be a hard problem for a long time. What if I want to take one object (a solid brick) and blast it into literally a million pieces? This kind of edge case crops up all over destructive simulation but has no generally good solution.
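
The million-piece brick makes the scale concrete. A rough sketch of the collision-pair arithmetic; the 8-neighbour broadphase figure is an assumption chosen purely for illustration:

    #include <cstdio>

    int main() {
        long long n = 1000000;                   // fragments of one solid brick
        long long naivePairs = n * (n - 1) / 2;  // every fragment vs every other

        printf("fragments:                 %lld\n", n);
        printf("naive pairs per time step: %lld\n", naivePairs);  // ~5.0e11

        // Even a broadphase that prunes each fragment down to ~8 likely
        // neighbours (an assumed figure) leaves millions of narrow-phase
        // tests per step -- before a single fragment has been rendered.
        long long prunedPairs = n * 8 / 2;
        printf("pruned pairs (~8 neighbours each): %lld\n", prunedPairs);  // 4,000,000
        return 0;
    }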

#7 fir   Members   -  Reputation: -460

-10 Likes

Posted 16 July 2014 - 01:47 PM

ApochPiQ, sorry, but don't wait for an answer. Because of your previous behaviour, which was exceptionally primitive, I do not talk with such people, sorry.



#8 frob   Moderators   -  Reputation: 27640

11 Likes

Posted 16 July 2014 - 03:47 PM

And... we're done here.

Check out my book, Game Development with Unity, aimed at beginners who want to build fun games fast.

Also check out my personal website at bryanwagstaff.com, where I write about assorted stuff.







