What is the difference between CPU and GPU?

Hi everyone,

As I understand it, the CPU is the central processing unit, which I can program to do some work, and the GPU is something you can also program to do work, in parallel, but its function is limited, e.g. it can add but cannot multiply. But since the whole computer world is based on binary bits and the logic functions AND, OR, NOT, if one tries, can't he program the GPU to do whatever the CPU can do? Or is there work the CPU can do but the GPU cannot?

Regards
The CPU is often a chip on the motherboard under a huge heatsink. The GPU is a chip (also under a heatsink) on an expansion card sitting in a PCIe, AGP or PCI slot on the motherboard.

The CPU is where everything runs, Windows etc. All programs also run on the CPU. The GPU is an "extension" to the CPU. The GPU and CPU communicate over a bus, most often the PCIe bus. The CPU has its system memory and the GPU has its device memory.

The GPU is optimized for doing many calculations using floats. For this reason data is sent to the GPU memory to do a huge number of float operations. The bonus is that the video output to your monitor is hooked up to the GPU memory, so you can generate animation in real time on the GPU without transferring the result back to system memory.

The GPU and CPU can do most of the same math operations, but the GPU is optimized for floats.
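
Roughly what that looks like in code, using CUDA as one example API (just a sketch; the buffer size and variable names are made up):

// Sketch only: system (host) memory belongs to the CPU, device memory to the GPU,
// and data has to cross the bus via an explicit copy.
#include <cuda_runtime.h>
#include <cstdlib>

int main() {
    const int n = 1 << 20;                    // an illustrative one million floats
    const size_t bytes = n * sizeof(float);

    float* host = (float*)malloc(bytes);      // system memory, next to the CPU
    float* device = nullptr;
    cudaMalloc((void**)&device, bytes);       // device memory, next to the GPU

    cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice);  // upload across the bus
    // ... launch GPU work on 'device' here ...
    cudaMemcpy(host, device, bytes, cudaMemcpyDeviceToHost);  // read back, if needed

    cudaFree(device);
    free(host);
    return 0;
}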
The CPU is indeed the Central Processing Unit, commonly referred to as 'the processor' of your computer. The GPU is the Graphics Processing Unit, which is a processor chip found on your graphics card.

The CPU is designed for general purpose usage and depending on its architecture it can provide security features (like protection rings in the x86 architecture), different operating modes and a whole bunch of possible extensions.

The GPU was originally designed solely for doing graphics-related jobs and can process a huge number of floating point operations in comparison to your standard CPU. I don't know where you got your information about available operations on GPUs, but they can definitely do multiplication (it would be quite terrible if they weren't able to do something this trivial).

These days the GPU is used more and more for general purpose tasks which involve heavy number crunching, but generally speaking a GPU's ISA (Instruction Set Architecture) is much more limited than your common CPU's ISA.

I gets all your texture budgets!

Thanks,


Can the GPU do branch instructions like 'if'? And I heard that some of the best ray tracing engines can only run on the CPU; is this true, and why don't they use the GPU for the maths?

Can a computer run on a GPU alone? If not, then why?


Regards

PS: @Radikalizm, the add/multiply thing was just an example; I didn't know how to put it clearly.
You don't write whole programs for the GPU. What you do is have it run a small set of instructions (a kernel) in parallel many times over a set of data.

For instance, you copy a bunch of vertex data to device memory, then ask the GPU to do math on the vertices. Then, for each set of vertices, you output a bunch of pixels and run a kernel on each pixel as well. These are called vertex and pixel shaders, and they are used to generate an image to be displayed on your monitor.

Recently, people have started to use this pipeline for purposes other than generating an image, namely number crunching.
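
As a rough illustration (a CUDA-style sketch, not anything from a real engine), a kernel is just a tiny function the GPU runs once per element, in parallel, over the whole buffer:

// Illustrative kernel: the same small program runs for every element of the buffer.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // which element this thread handles
    if (i < n)
        data[i] *= factor;                          // the "number crunching" step
}

// Launched with one thread per element, e.g. for data already in device memory:
// scale<<<(n + 255) / 256, 256>>>(devicePtr, 2.0f, n);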
Yes, modern GPUs are able to do branching, but I believe CPUs are generally more efficient at doing branches, or at least that used to be the case.

You have to keep in mind that a GPU is designed to do a large amount of similar jobs at once in parallel, that's why it's so efficient at doing heavy calculations. It'd be extremely hard or maybe even impossible to build an operating system kernel which could run on this kind of architecture. Working with memory would also be a major issue.

About ray tracers, there are a lot of ray tracing implementations which run on the GPU or which at least use the GPU to accelerate the process. The problem with CPU–GPU interop is that you'll encounter latency issues when uploading data to the GPU or when reading data back. It's only worth uploading a job to the GPU if the speed improvement you get from it outweighs the memory latency of the data upload and any potential readback.
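
A sketch of how you might check that trade-off in CUDA (the kernel launch is left as a placeholder; devicePtr and hostPtr are assumed to already exist):

#include <cuda_runtime.h>

// Returns the wall time in milliseconds spent on upload + work + readback.
float timedRoundTrip(float* devicePtr, float* hostPtr, size_t bytes) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(devicePtr, hostPtr, bytes, cudaMemcpyHostToDevice);  // upload
    // someKernel<<<blocks, threads>>>(devicePtr, ...);             // the actual work
    cudaMemcpy(hostPtr, devicePtr, bytes, cudaMemcpyDeviceToHost);  // readback
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;  // if the copies dominate this number, the job was too small for the GPU
}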

I gets all your texture budgets!

GPUs have been Turing complete for a while, so they can theoretically perform any calculation a CPU can perform.
In practice, the GPUs don't have any way to communicate with peripherals, just with the CPU, so that should make it obvious why one can't run a computer with a GPU alone.
It would also be extremely impractical to run some types of programs on a GPU - for example, a general purpose operating system - because the GPUs lack key features such as interrupts that are critical to implementing those programs in practice.
You can think of a GPU as basically being a very, very, very wide SIMD CPU.
Normally when you compute x = y + z, those 3 variables represent single values.
e.g. 2 + 2 results in 4.
With SIMD, those 3 variables represent arrays.
e.g. [2,7,1] + [2,1,1] results in [4,8,2].

Each instruction is simultaneously executed over a large number of values, so that you get more work done faster.

You want to avoid branching with this kind of architecture, because you end up wasting a lot of your SIMD abilities.

e.g. take the code
if( y > 5 )
x = y;
else
x = z;

If we execute that with our data of y=[2,7,1] and z=[2,1,1], this results in:
if( y > 5 ) [false, true, false]
x = y; [N/A, 7, N/A]
else [true, false, true]
x = z; [2, N/A, 1]
//finally x = [2, 7, 1]
The GPU has had to execute both the 'if' and the 'else', ignoring some parts of its arrays for each branch and merging the results at the end. This is wasteful -- e.g. say the GPU has the capability to work on 3 pieces of data at once; in this example it's only ever working on 1 or 2 pieces of data at a time.
The more nested branches you add, the more wasteful this becomes... so those kinds of programs are better off running on regular CPUs (or being redesigned to better suit this style of hardware).
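
For what it's worth, the same idea in CUDA-style code (just a sketch): the divergent version forces the hardware to run both sides of the branch with some lanes masked off, while the rewritten version picks a value without branching:

__global__ void pick_divergent(float* x, const float* y, const float* z, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        if (y[i] > 5.0f)   // threads in the same group may disagree here...
            x[i] = y[i];   // ...so both paths get executed, with some lanes idle
        else
            x[i] = z[i];
    }
}

__global__ void pick_select(float* x, const float* y, const float* z, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] = (y[i] > 5.0f) ? y[i] : z[i];  // typically compiles to a select, no divergence
}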

"In practice, the GPUs don't have any way to communicate with peripherals, just with the CPU ... the GPUs lack key features such as interrupts that are critical to implementing those programs in practice."
Out of interest's sake, GPUs can generate CPU interrupts and write to arbitrary addresses (which might be mapped to peripherals), but these abilities aren't exposed on PCs (outside of the driver).
Many thanks guys.

Oh, and thank you, Hodgman, for the detailed explanation.

Regards
I think about it like this:

CPU does the math, GPU does the rendering.

Ta-da! :) (I'm not a smart programmer :P).

I'm a game programmer and computer science ninja!

Here's my 2D RPG-Ish Platformer Programmed in Python + Pygame, with a Custom Level Editor and Rendering System!

Here's my Custom IDE / Debugger Programmed in Pure Python and Designed from the Ground Up for Programming Education!

Want to ask about Python, Flask, wxPython, Pygame, C++, HTML5, CSS3, Javascript, jQuery, Vimscript, SFML 1.6 / 2.0, or anything else? Recruiting for a game development team and need a passionate programmer? Just want to talk about programming? Email me here:

hobohm.business@gmail.com

or Personal-Message me on here !

