Does this prediction seem valid? (future of graphics cards)

19 comments, last by intrest86 16 years, 4 months ago
Hello everyone, I was thinking about the future of computers, in particular graphics cards. Tell me what you guys think about my prediction, whether it's valid or I'm just stupid to even think such a thing :P

Anyways, it wasn't that long ago that we were still using single-core CPUs; now most people have dual cores, and more and more people are buying quad cores. There are already 16-core CPUs out there, though not really available to the public. These CPUs have clock speeds of 2.4 GHz - 3.0 GHz, while video cards currently have clock speeds of around 1.5 GHz (the highest I've seen). I'm thinking that soon video cards will become obsolete, because a CPU will have so many cores that it could dedicate 1, 2 or even 3 of them to just computing graphics. We would have no need for special hardware because the CPU would already be faster, and all the graphics computation would probably run in software mode. We are also moving to 64-bit operating systems now, so we can expect computers with way more than 4 gigs of RAM, meaning the memory on the graphics card probably wouldn't even be needed either. What do you guys think?
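For what it's worth, here's a minimal sketch of what "dedicating a few cores to graphics" could look like in software: splitting the scanlines of a frame across worker threads. Everything here (shadePixel, renderBand, the resolution) is a made-up illustration, not a real renderer.

```cpp
#include <cstdint>
#include <thread>
#include <vector>

const int WIDTH  = 1280;
const int HEIGHT = 720;
std::vector<std::uint32_t> framebuffer(WIDTH * HEIGHT);

// Stand-in for real shading work: just a simple colour gradient.
std::uint32_t shadePixel(int x, int y)
{
    std::uint32_t r = static_cast<std::uint32_t>(255 * x / WIDTH);
    std::uint32_t g = static_cast<std::uint32_t>(255 * y / HEIGHT);
    return (r << 16) | (g << 8);
}

// Each worker renders its own band of scanlines; rows are independent,
// so no synchronization is needed until the whole frame is done.
void renderBand(int firstRow, int lastRow)
{
    for (int y = firstRow; y < lastRow; ++y)
        for (int x = 0; x < WIDTH; ++x)
            framebuffer[y * WIDTH + x] = shadePixel(x, y);
}

// Split the frame across however many cores are "dedicated to graphics".
void renderFrame(int graphicsCores)
{
    std::vector<std::thread> workers;
    int rowsPerCore = HEIGHT / graphicsCores;
    for (int i = 0; i < graphicsCores; ++i)
    {
        int first = i * rowsPerCore;
        int last  = (i == graphicsCores - 1) ? HEIGHT : first + rowsPerCore;
        workers.emplace_back(renderBand, first, last);
    }
    for (auto& w : workers)
        w.join();
}

int main()
{
    renderFrame(4); // pretend 4 CPU cores are dedicated to graphics
    return 0;
}
```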
-------------------------------------
Physics Lab: http://www.physics-lab.net
C++ Lab: http://cpp.physics-lab.net
Your average GeForce 8800 has 128 processors sharing what is the equivalent of a single 768 MB L2 cache. It provides efficient hardware scheduling and synchronization facilities and atomic operations currently unheard of on mainstream processor architectures, and it allows perfect control of the L1 cache through assembly. All of it runs at around 1.6 GHz. Though they are currently quite good for parallel computation, general-purpose processors and multi-core architecture designs still have a long way to go before they can rival GPU processors for massively parallel computations (such as those involved in rendering).
Take a look at how hardware and software are built in layers upon layers, each layer specializing in what it's good at. Adding more CPUs or RAM won't do the trick. We need research into parallel programming and dynamic optimization techniques, along with massive distributed resources on ultra-fast networks, before a paradigm shift can happen.

Dedicated graphics cards will probably still be around 25 years from now.
It is I, the spectaculous Don Karnage! My bloodthirsty horde is on an intercept course with you. We will be shooting you and looting you in precisely... Ten minutes. Felicitations!
I'm not much of a hardware guy, but what you guys said makes sense.
-------------------------------------
Physics Lab: http://www.physics-lab.net
C++ Lab: http://cpp.physics-lab.net
I think there is a good chance we'll see more of the graphics going back to the 'main processor'.

The original reason for developing graphics cards in the first place was that the main processor didn't have the power to keep things running smoothly. We are already seeing graphics cards becoming more general, and people using them to calculate other things, so it only makes sense to experiment with tying the memory and processor of the graphics card more closely to the memory and processor of the main motherboard.

At the very least, the graphics processor and memory should look to the computer like nothing more than a second processor, and be fully usable as such for whatever you want. Currently graphics cards are great for pushing data to, but returning data is a major failing point (at least last I read ages ago).
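To make the readback point concrete, here is a minimal sketch of the asymmetry, assuming a legacy OpenGL context is already set up (the function bodies and sizes are illustrative, not anyone's actual code): uploading data to the card is the fast, intended path, while reading results back stalls the CPU until the GPU has finished.

```cpp
#include <GL/gl.h>
#include <vector>

// Uploading data to the card: the fast, intended direction.
void uploadTexture(const std::vector<unsigned char>& pixels, int w, int h)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
}

// Reading results back: glReadPixels forces the CPU to wait for the GPU
// to finish, which is the "major failing point" mentioned above.
std::vector<unsigned char> readBackFramebuffer(int w, int h)
{
    std::vector<unsigned char> result(w * h * 4);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, &result[0]);
    return result;
}
```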
Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.
I disagree. Jack-of-all-trades hardware is a good money-saving option, but there will always be specialised graphics hardware, since there will always be algorithms which are used heavily for graphics but not used frequently anywhere else.

However, I've recently experimented with using OpenGL to do matrix maths for stuff other than graphics (push the view matrix, perform character movements, pop the matrix, then return the result), and it's likely that video hardware, physics hardware, and a few other things will be combined into a single card with specialised registers and co-processors, so that this kind of thing gives a real performance boost. The more generic stuff will always be done on the CPU.
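Roughly the kind of trick described above, as a sketch (these are standard fixed-function OpenGL calls, but composeMovement and its parameters are made up for illustration, not the poster's actual code):

```cpp
#include <GL/gl.h>

// Compose a character's movement as a 4x4 transform on the OpenGL matrix
// stack, then read the result back instead of drawing with it.
void composeMovement(float out[16], float dx, float dy, float dz, float yawDegrees)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();                          // "push the view matrix"
    glLoadIdentity();
    glTranslatef(dx, dy, dz);                // "perform character movements"
    glRotatef(yawDegrees, 0.0f, 1.0f, 0.0f);
    glGetFloatv(GL_MODELVIEW_MATRIX, out);   // read the composed matrix back
    glPopMatrix();                           // "pop your matrix, then return"
}
```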

I just wanted to see if he would actually do it. Also, this test will rule out any problems with system services.
I think we'll start seeing high-end graphics chips more tightly integrated with the main processor, like Intel's Integrated Graphics, but it will be a while before graphics processors are retired entirely. Expect to see "AMD/ATI Athlon 64 w/ integrated 9900GTS" in a few years.
Brent Gunning
Nope, your prediction seems highly unlikely unless the nature of the problem of displaying graphics changes dramatically. As it stands now, graphics is an easily parallelized problem [as was mentioned above], while 'general' computing is mostly a linear problem, sliced into little chunks. That means graphics hardware is designed to work well for this vector processing model, while general-purpose processors are designed more to work linearly [and actually aggressively attempt to turn a linear problem into a number of tiny parallel problems, through various predictions and complicated bookkeeping, all of which is unnecessary for graphics and would thus be redundant in graphics hardware].

It generally boils down to a difference in the type of problem that is being solved.

Also, clock speed is a rather meaningless metric to compare in this instance [and frankly is a rather meaningless metric to compare in general, but whatever]. A modern graphics chip does in one cycle what would take possibly hundreds or even thousands of cycles to complete on a modern general-purpose CPU. Frankly, even the differences between various CPUs vary wildly, which is why you see huge differences in benchmarks at the same clock rates but different processor architectures [like why a 2 GHz Opteron beats the hell out of a 3.4 GHz P4: the Opteron simply does more per cycle than the P4 does, even if the P4 has more cycles to do things. Check out Sun's Niagara 2 for a good example of this].
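A back-of-the-envelope sketch of why that is (the core counts, issue widths and clocks below are invented placeholders, not real benchmark figures): peak throughput is roughly cores × operations per cycle × clock, so a wide, "slow" chip can easily outrun a narrow, fast one.

```cpp
#include <cstdio>

// Rough peak throughput in operations per second:
// cores * ops issued per cycle per core * clock rate.
double peakOpsPerSecond(int cores, int opsPerCycle, double clockGHz)
{
    return cores * opsPerCycle * clockGHz * 1e9;
}

int main()
{
    double narrowFastChip = peakOpsPerSecond(1,   1, 3.4); // high clock, little per cycle
    double wideSlowChip   = peakOpsPerSecond(128, 1, 1.6); // modest clock, many units
    std::printf("narrow/fast: %.1f Gops   wide/slow: %.1f Gops\n",
                narrowFastChip / 1e9, wideSlowChip / 1e9);
    return 0;
}
```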

Clock speed is a pretty useless metric alone.
Quote: Original post by Don Carnage
Dedicated graphics cards will probably still be around 25 years from now.
They haven't existed anywhere near that long, so that's a totally unfounded prediction. In 25 years computers will probably be totally different from what they are now; the gap between 25 years ago and today is nothing compared to the gap between now and 2030, thanks to Moore's law.

Quote: Original post by d000hg
Quote: Original post by Don Carnage
Dedicated graphics cards will probably still be around 25 years from now.
They haven't existed anywhere near that long so that's a totally unfounded prediction.


Well, yes and no; pluggable graphics boards haven't existed for that long, however even in the early 80s there were dedicated co-processors for display purposes (such as the ANTIC in the Atari 400). Granted, they weren't as powerful as the current chips, but the idea was the same: give it a list of work to process via DMA and it got on with it.

This topic is closed to new replies.
