tufflax

GPU vs CPU: How much can the poor GPU take?


Hey! I'm just wondering: why should I let as much as possible be calculated by the GPU, in shaders? Everyone I've heard say anything about it says so, but there has to be a limit. What if "as much as possible" is really a lot? So I guess the question is this: is there a good rule of thumb I can follow? And how do I notice that the GPU is starting to get too much to handle? Because the frame rate of my game can depend on both the CPU and the GPU, right? And of course I may have a very fast GPU, but I want the game to be playable by as many people as possible, and they may not be quite so fortunate. Hmm...

Profile profile profile!

You're correct that there are limits to the processing power of the GPU, but a particular application will generally be bound by one or the other, depending on its nature, and so it may be beneficial to move processing to the non-bottlenecked hardware, if possible.

And the only real way you can tell is to profile your application. The major hardware vendors (NVIDIA, at least) have documents on their developer portals that describe various ways to profile the GPU usage of your application (which isn't quite as easy as using a CPU-side profiler) and offer suggestions for alleviating the problems that can arise.
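A crude first pass, before reaching for a vendor tool, is to compare how long the CPU spends building a frame against how long the whole frame takes: if the CPU is busy almost the entire frame, you're CPU-bound; if the CPU finishes early and the frame still takes much longer, you're stalling on the GPU. A minimal sketch of that idea (the timings and the tolerance value here are made-up illustration numbers, not measurements):

```python
# Sketch: classifying a frame as CPU- or GPU-bound from rough timings.
# cpu_ms   = time the CPU spent on game logic + submitting draw calls
# frame_ms = total wall-clock time from one frame to the next

def classify_frame(cpu_ms, frame_ms, tolerance_ms=1.0):
    """If CPU work nearly fills the whole frame, the CPU is the bottleneck;
    if the frame takes much longer than the CPU work, the CPU is idling
    while it waits on the GPU."""
    if frame_ms - cpu_ms <= tolerance_ms:
        return "cpu-bound"
    return "gpu-bound"

# Two hypothetical frames:
print(classify_frame(15.8, 16.4))  # CPU busy almost the whole frame
print(classify_frame(4.0, 33.0))   # CPU idles while the GPU grinds
```

This only gives a hint, not a diagnosis; the vendor profiling documents mentioned above explain how to pin down *which* GPU stage (vertex, pixel, bandwidth, etc.) is actually the bottleneck.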

It's not all down to GPU speed either. Aside from some of the very latest and greatest GPUs, the majority of processing tasks that you might want to push onto the GPU cannot actually be moved off the CPU because of the physical hardware limitations of the graphics card, such as a maximum number of instructions per shader, for example.

So in order to maintain a suitable range of backwards compatibility you will typically have certain tasks that you will want to keep on the CPU; even if your particular graphics card could handle them, your clients' cards might be incapable of doing so (which is different from being able to do it, just with shoddy performance).
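In practice this means querying the device's reported capabilities at startup and picking a code path accordingly. A hedged sketch of that selection logic, where the caps dict, the shader-model values, and the instruction budget are all hypothetical stand-ins for whatever your graphics API actually reports (e.g. Direct3D caps bits):

```python
# Sketch: choosing a rendering path from reported device capabilities.
# The keys and thresholds here are illustrative, not real API values.

def pick_lighting_path(caps):
    # Full effect needs a roomy instruction budget and a recent shader model.
    if caps["shader_model"] >= 3.0 and caps["max_instructions"] >= 512:
        return "full_shader"
    # Older shader-capable cards get a cheaper approximation.
    if caps["shader_model"] >= 2.0:
        return "reduced_shader"
    # Everything else falls back to a CPU / fixed-function path.
    return "cpu_fallback"

# An older shader-model-2 card with a small instruction limit:
print(pick_lighting_path({"shader_model": 2.0, "max_instructions": 96}))
```

The point is that the fallback is chosen up front from hard limits, rather than discovered at runtime as a shader that fails to compile or run.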

You are correct in all of your assumptions. The GPU's capabilities are limited, and frame-rate can be bogged down by either the GPU or the CPU.

With that in mind, most people tell you to move as much as you can to the GPU because that's exactly what it is there for. It's a second processor designed exclusively (DX10 changes this a bit) for performing graphical computations such as transforms, projections, lighting, etc...

So while a game consists of rendering, audio, user input, artificial intelligence, physics, animation, file I/O, networking, etc., only rendering currently has the option of being offloaded to a second processor. This leaves more CPU cycles available to handle everything else.

Keep in mind, however, that there are hardware vendors out there developing cards for processing physics, networking, and artificial intelligence. None of these are mainstream yet, but there is a growing trend (or at least the hardware vendors would like to think so) toward moving as much non-rendering game logic as possible onto tailored cards.

Also, with the advent of DirectX10, it is now possible to upload data to the GPU and have it perform calculations whose results can be written back to memory via the Stream-Output stage. This will have an impact on physics primarily, but it does show people's interest in freeing up the CPU for more high-level game logic.
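The shape of work this enables is: upload a buffer, apply one small uniform program to every element with no cross-element dependencies, and stream the results back where the CPU can read them. A purely conceptual sketch of that round trip, where `gpu_pass` stands in for a real vertex/geometry shader with stream output enabled (nothing here touches actual GPU APIs):

```python
# Conceptual model of the stream-output round trip:
# upload data -> run one "shader" over every element -> read results back.

def gpu_pass(shader, input_buffer):
    # One uniform program per element, no element-to-element dependencies --
    # exactly the shape of work GPUs are built for.
    return [shader(element) for element in input_buffer]

def integrate(particle, dt=0.1):
    # A toy physics "shader": advance position by velocity * dt.
    pos, vel = particle
    return (pos + vel * dt, vel)

particles = [(0.0, 1.0), (2.0, -0.5)]          # (position, velocity) pairs
particles = gpu_pass(integrate, particles)     # results "streamed out" to memory
print(particles)
```

Anything that fits this pattern (particle integration, skinning, simple simulation steps) is a candidate for offloading; anything with branching, global state, or tight CPU feedback loops usually is not.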

Cheers!

