Will GPUs disappear?



Simple question. I mean, if we look at the newest cards on the market (the Radeon 9000 series and the GeForce FX), we can see them as massively parallel SIMD processors. That said, and knowing that Intel and AMD are working in exactly the same direction, I'm asking myself whether, sooner than we think, all operations will be done within the CPU. I mean, Intel will soon master the 0.09-micron (90 nm) fabrication process, so it will be easier to add a lot more vertex-shader/pixel-shader SIMD units directly to the CPU.
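
To make concrete what I mean, here's a minimal sketch (in C++ with SSE intrinsics; the function and its names are just made up for illustration) of doing vertex-shader-style work on the CPU, transforming a batch of positions by a 4x4 matrix:

#include <cstddef>
#include <xmmintrin.h>  // SSE intrinsics

// Transform n vec4 positions by a column-major 4x4 matrix.
// All float pointers are assumed 16-byte aligned.
void transform_vertices(const float matrix[16],
                        const float* in, float* out, std::size_t n)
{
    // Load the four matrix columns once.
    const __m128 c0 = _mm_load_ps(matrix + 0);
    const __m128 c1 = _mm_load_ps(matrix + 4);
    const __m128 c2 = _mm_load_ps(matrix + 8);
    const __m128 c3 = _mm_load_ps(matrix + 12);

    for (std::size_t i = 0; i < n; ++i) {
        const float* v = in + 4 * i;
        // result = c0*x + c1*y + c2*z + c3*w
        __m128 r = _mm_mul_ps(c0, _mm_set1_ps(v[0]));
        r = _mm_add_ps(r, _mm_mul_ps(c1, _mm_set1_ps(v[1])));
        r = _mm_add_ps(r, _mm_mul_ps(c2, _mm_set1_ps(v[2])));
        r = _mm_add_ps(r, _mm_mul_ps(c3, _mm_set1_ps(v[3])));
        _mm_store_ps(out + 4 * i, r);
    }
}

A GPU does essentially this across many pipelines at once, which is exactly the gap I'm asking about.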

You have a point, but I would say that GPUs are here to stay for at least another year, since they are so extremely optimized for 3D and graphics calculations, even though CPUs grow so much more powerful with every day that passes...

And the memory available to a GPU is much faster than the physical memory in your computer, and that memory is still far too expensive to be used as regular system RAM.

You can see this by playing e.g. Half-Life in software mode at a high resolution: even with a decent computer you are unlikely to reach maximum FPS, which a new graphics card will exceed about twenty times over.
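
As a rough illustration (a toy benchmark I'm making up here; numbers will vary by machine), even the cheapest possible per-pixel work in software has to touch every pixel of every frame, which is exactly where high resolutions hurt:

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    const int width = 1600, height = 1200, frames = 100;
    std::vector<std::uint32_t> fb(width * height);

    auto t0 = std::chrono::steady_clock::now();
    for (int f = 0; f < frames; ++f)
        for (std::uint32_t& px : fb)
            px = 0xFF336699u;  // a flat fill: the cheapest "shader" there is
    double secs = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - t0).count();

    // Read the buffer back so the compiler can't discard the fill loop.
    std::printf("last pixel 0x%08X, %.1f Mpixels/s of pure fill\n",
                (unsigned)fb.back(),
                frames * double(width) * height / secs / 1e6);
}

Real shading, texturing, and blending cost many times more per pixel than this flat fill, while a graphics card has dedicated hardware and memory bandwidth for exactly that.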

hi,

There are many types of pens:
one to write,
another to draw.

Now you could make one pen which does both things!
But why would I buy your new pen if I just want to write, or just want to draw? Because I can find specific pens for writing or for drawing which are of course better than your two-in-one pen!

It's the same thing in this case: every company has a goal and a lead product. There may be some hybrid products, like graphics chips integrated into motherboards, but never more than that, I think.

That's my opinion.

++
VietCoder

It's entirely possible... I wonder whether the best way would be a single processor that does everything, or multiple processors, each able to execute low-level 3D instructions, where a game might dedicate one processor to physics/network/rules and the other to graphics, while a rendering app (like digital movie rendering) might use both CPUs for 3D...
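
As a very rough sketch of that split (using today's C++ threads; update_world and draw_world are placeholders I'm inventing, and a real game would hand over snapshots of state rather than a bare counter):

#include <atomic>
#include <thread>

std::atomic<int> latest_tick{0};

void update_world()  // placeholder: the physics / network / rules processor
{
    for (int tick = 1; tick <= 1000; ++tick)
        latest_tick.store(tick, std::memory_order_release);
}

void draw_world()    // placeholder: the graphics processor drawing latest state
{
    int seen = 0;
    while (seen < 1000)
        seen = latest_tick.load(std::memory_order_acquire);
}

int main()
{
    std::thread logic(update_world);
    std::thread render(draw_world);
    logic.join();
    render.join();
}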



[edited by - Nypyren on January 21, 2003 5:30:23 PM]

Ever heard of Cell? It's the new processor coming out for the PS3. It contains several mini-processors that act as one, and it's claimed to do 100x the calculations of a 2.5 GHz Pentium 4. The PS3 will have a 4 GHz Cell...

They can also draw on the power of other Cells over the internet! Cell rules!

It's not quite what you're saying, but it applies to the above poster.

<- Digital Explosions ->
"Discipline is my sword, faith is my shield
do not dive into uncertainty, and you may live to reap the rewards" - (Unreal Championship)

>>Ever heard of Cell?<<
Yes, very very very impressive (it's actually quite an old idea, though). If they get it working, real-time raytracing of complex scenes at 1600x1200 will be a reality.

btw read this
http://www.sjbaker.org/steve/omniv/cpu_versus_gpu.html

http://uk.geocities.com/sloppyturds/kea/kea.html
http://uk.geocities.com/sloppyturds/gotterdammerung.html

Don't forget that the architecture of a PC is very different from that of a graphics card. For instance, while your CPU is multitasking in an OS, it is constantly clobbering registers and dirtying its cache. A GPU doesn't suffer from this because it's only handling one task at a time. Likewise, a framebuffer held in main system RAM would be subject to page faults (fighting for space), and every access has to be translated through the page table of the currently running application before the memory is touched. The framebuffer on your card is held in unpaged RAM and is available to your GPU in one step.
The only practical way to incorporate these features on a motherboard would be to add another CPU dedicated to graphics, plus RAM that is exclusive to it. Voila, you've rebuilt the card.
There's a lot more to consider than that, but you get the picture.
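
As a small aside, you can approximate the "unpaged RAM" point on the host side. On a POSIX system, mlock() pins a buffer so it can never be paged out, which is roughly the guarantee dedicated video memory gives a GPU for free (a hedged sketch of the idea, not how any driver actually works):

#include <cstdio>
#include <cstdlib>
#include <sys/mman.h>

int main()
{
    const std::size_t bytes = 1600 * 1200 * 4;  // one 32-bit framebuffer
    void* fb = std::malloc(bytes);
    if (!fb) return 1;

    if (mlock(fb, bytes) != 0) {  // may fail if RLIMIT_MEMLOCK is low
        std::perror("mlock");
    } else {
        std::puts("framebuffer pinned: no page faults on access");
        munlock(fb, bytes);
    }
    std::free(fb);
    return 0;
}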

quote:
Original post by zedzeek
>>Ever heard of Cell?<<
Yes, very very very impressive (it's actually quite an old idea, though). If they get it working, real-time raytracing of complex scenes at 1600x1200 will be a reality.

btw read this
http://www.sjbaker.org/steve/omniv/cpu_versus_gpu.html

http://uk.geocities.com/sloppyturds/kea/kea.html
http://uk.geocities.com/sloppyturds/gotterdammerung.html



That link you gave isn't working.

Well, at some point CPUs will be fast enough to write a software engine that does per-pixel effects at 1600x1200. GPUs will still be faster, but beyond a certain point it'll be hard to find a use for all the power!
Quite a while yet, though!



Read about my game, project #1
NEW (18th December): 2 new screenshots, one from the engine and one from the level editor



John 3:16

Hard to find a use for the power? I don't think so; CPUs can always use more power, because with more power you can always do MORE. For example, you might think you have the perfect AI, but with more power you can make it smarter, so it can now simulate emotions or something like that.

There are some things that just can't be done, and will never be able to be done, on a CPU no matter how much power you have: for example, real-time code breaking. If I had a computer that could crack 2048-bit keys in one second, then everyone would switch to 4096-bit keys and I wouldn't be able to crack them anymore (brute-force effort grows exponentially with key length, so doubling the length puts the work hopelessly out of reach again).

Those examples don't relate to gaming/graphics, but the same principle applies: you could always simulate more effects in a raytracer, so once you can render at 1600x1200 it's purely a quality issue, and CPU power determines how good it looks. More power means it can look better, and there is no limit to that short of true physical correctness, which would take a machine with as many states as the universe has, and that can't happen.

Also, combining the CPU and graphics is a bad idea for this very reason. It's about parallelism: you can always separate things to get more parallelism, like having a separate card for decoding MPEG-4 and a graphics card for graphics. Doing both on the CPU means sharing it, and adding more CPUs means using unoptimized, general-purpose hardware for something that can be done faster on specialized hardware.
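
The arithmetic of that split is simple: two independent jobs on two workers finish in roughly max(a, b) time instead of a + b. A minimal sketch (busy_work is an invented stand-in for "MPEG-4 decoding" and "graphics"; real pipelines share data and are messier):

#include <future>

long busy_work(long iterations)  // stand-in for a fixed chunk of work
{
    long acc = 0;
    for (long i = 0; i < iterations; ++i)
        acc = acc * 31 + i;      // keeps the loop from being folded away
    return acc;
}

int main()
{
    // Run "decoding" and "rendering" concurrently: on two execution units
    // (or a CPU plus a dedicated card) the total time is the slower of the
    // two jobs, not their sum.
    auto decode = std::async(std::launch::async, busy_work, 100000000L);
    auto render = std::async(std::launch::async, busy_work, 100000000L);
    decode.get();
    render.get();
    return 0;
}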

