

Will GPUs disappear?


Recommended Posts

Simple question. If we look at the newest cards on the market (the Radeon 9000 series and the GeForce FX), we can see them as freakishly parallel SIMD processors. That said, and knowing that Intel and AMD are working in exactly the same direction, I'm asking myself whether, sooner than we think, all operations will be done within the CPU. I mean, Intel will soon master the 0.09 micron chip-making process, so it'll be easier to add a lot more vertex shader / pixel shader SIMD units directly to the CPU.
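
To make the "parallel SIMD processor" idea concrete, here is a minimal sketch of the kind of 4-wide vector math a shader unit does per instruction, done on the CPU with SSE intrinsics (it assumes an SSE-capable CPU, 16-byte-aligned arrays, and n a multiple of 4):

#include <xmmintrin.h>  // SSE intrinsics

// Add two float arrays four elements at a time, the way a SIMD
// shader unit processes four components per instruction.
void add4(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; i += 4)
    {
        __m128 va = _mm_load_ps(a + i);            // load 4 floats
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(out + i, _mm_add_ps(va, vb)); // 4 adds in one go
    }
}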

You have a point, but I would say that GPUs are probably here to stay for at least another year, since they are so extremely optimized for 3D and graphics calculations, even though CPUs become more powerful with every day that passes.

And the memory available to GPUs is much faster than the physical memory in your computer, and that memory is still far too expensive to be used as regular system RAM.

You can see this by playing, e.g., Half-Life in software mode at a high resolution: even on a fast computer you are unlikely to reach maximum FPS, which you would exceed about twenty times over with a new graphics card.

Hi,

There are many types of pen:
one to write,
another to draw.

Now, you could make one pen which can do both things!
But why would I buy your new pen if I just want to write, or just want to draw? Because I can find a specific pen for writing or for drawing which is, of course, better than your two-in-one pen!

It's the same thing in this case: every company has a goal and a lead product. There may be some hybrids, like graphics chips integrated into motherboards, but never more than that, I think.

That's my opinion.

++
VietCoder

It's entirely possible... I wonder whether the best way would be to have a single processor that did everything, or multiple processors, each able to execute low-level 3D instructions... where a game might dedicate one processor to physics/network/rules and the other to graphics, and a rendering app (like digital movie rendering) might use both CPUs for 3D...
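
A minimal sketch of that split, assuming modern C++ threads and purely hypothetical per-frame work functions:

#include <thread>
#include <cstdio>

// Hypothetical workloads for each dedicated processor.
void simulate(int frames) { for (int i = 0; i < frames; ++i) { /* physics, network, rules */ } }
void render(int frames)   { for (int i = 0; i < frames; ++i) { /* draw the scene */ } }

int main()
{
    // Dedicate one thread (one processor) to simulation, one to graphics.
    std::thread sim(simulate, 1000);
    std::thread gfx(render, 1000);
    sim.join();
    gfx.join();
    std::puts("both workloads ran in parallel");
}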




Ever heard of Cell? It's the new processor coming out for the PS3. It contains several mini-processors inside itself; they act as one and can do 100x more calculations than a 2.5 GHz Pentium 4. The PS3 will have a 4 GHz Cell...

They can also draw power from other Cells over the internet! Cell rules!

It's not quite what you're saying, but it applies to the above poster.

<- Digital Explosions ->
"Discipline is my sword, faith is my shield
do not dive into uncertainty, and you may live to reap the rewards" - (Unreal Championship)

>>Ever heard of cell?<<
Yes, very, very, very impressive (it's actually quite an old idea, though). If they get it working, real-time raytracing of complex scenes will be a reality at 1600x1200.

BTW, read this:
http://www.sjbaker.org/steve/omniv/cpu_versus_gpu.html

http://uk.geocities.com/sloppyturds/kea/kea.html
http://uk.geocities.com/sloppyturds/gotterdammerung.html

Don't forget that the architecture of a PC is very different from that of a graphics card. For instance, while your CPU is multitasking in an OS, it is constantly clobbering registers and dirtying its cache. A GPU doesn't suffer from this because it's only handling one task at a time. Likewise, a framebuffer held in PC RAM would be subject to page faults (fighting for space) and has to be translated through the page table of the currently running application before it can be accessed. The hardware framebuffer on your card is held in unpaged RAM and is available to your GPU in one step.
The only practical way to incorporate these features on a motherboard would be to add another CPU dedicated to graphics, with RAM exclusive to it. Voilà, your card.
There's a lot more to consider than that, but you get the picture.
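
As a rough illustration of the software path, here is a minimal sketch of a framebuffer held in ordinary, pageable system RAM that the CPU must fill pixel by pixel (the resolution is just an example):

#include <cstdint>
#include <vector>

int main()
{
    const int W = 1600, H = 1200;

    // A framebuffer in system RAM: pageable, cached, and competing
    // for space with everything else the OS is running.
    std::vector<uint32_t> framebuffer(W * H);

    // The CPU touches all ~1.9 million pixels itself, every frame.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            framebuffer[y * W + x] = 0xFF000000u;  // opaque black

    // The result would still have to be copied out to the card;
    // the GPU's own unpaged framebuffer skips all of this.
}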

quote:
Original post by zedzeek
>>Ever heard of cell?<<
Yes, very, very, very impressive (it's actually quite an old idea, though). If they get it working, real-time raytracing of complex scenes will be a reality at 1600x1200.

BTW, read this:
http://www.sjbaker.org/steve/omniv/cpu_versus_gpu.html

http://uk.geocities.com/sloppyturds/kea/kea.html
http://uk.geocities.com/sloppyturds/gotterdammerung.html


That link you gave isn't working.

Well, at some point CPUs will be fast enough to write a software engine that allows per-pixel effects at 1600x1200. GPUs will still be faster, but beyond a certain point it'll be hard to find a use for all the power!
Quite a while yet, though!
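
For a sense of scale, a back-of-the-envelope sketch of what per-pixel work at that resolution costs (the 60 FPS target and 20 operations per pixel are assumed figures, not measurements):

#include <cstdio>

int main()
{
    const double pixels      = 1600.0 * 1200.0;  // 1.92 million pixels
    const double fps         = 60.0;             // assumed target
    const double opsPerPixel = 20.0;             // assumed shading cost

    // Roughly 2.3 billion per-pixel operations every second,
    // before any geometry, physics, or game logic.
    std::printf("%.2f billion pixel ops/sec\n",
                pixels * fps * opsPerPixel / 1e9);
}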



Read about my game, project #1
NEW (18th December): 2 new screenshots, one from the engine and one from the level editor



John 3:16

Hard to find a use for the power? I don't think so. CPUs can always use more power, because then you can always do MORE. For example, you might think you have the perfect AI, but with more power you can make it smarter, and it can now simulate emotions or something like that.

There are some things that just can't be done, and will never be able to be done, on a CPU no matter how much power you have: for example, real-time code breaking. If I had a computer that could crack 2048-bit keys in one second, then everyone would use 4096-bit keys and I wouldn't be able to crack them anymore.
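
A quick sketch of why key length always wins that race: every extra bit doubles the brute-force search space (the key sizes here are just the ones from the example above):

#include <cstdio>
#include <cmath>

int main()
{
    // Brute-force work grows as 2^bits; compare via log10,
    // since 2^2048 overflows every native type.
    for (int bits : {2048, 4096})
        std::printf("%d-bit key: ~10^%.0f candidate keys\n",
                    bits, bits * std::log10(2.0));
    // Doubling the key length squares the search space,
    // so no fixed hardware speedup ever catches up.
}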

Those examples don't relate to gaming/graphics, but the same principle applies: you can always simulate more effects in a raytracer. Rendering at 1600x1200 is just a quality issue; CPU power will determine how good it looks. More power means it could look better, and there is no limit to that other than being truly physically correct, which would take a machine with as many states as the universe has, and that can't happen.

Also, combining the CPU and graphics is a bad idea for this very reason. It's about parallelism: you can always separate things to get more parallelism, like having a separate card for decoding MPEG-4 and a graphics card for graphics. Doing them on the CPU means sharing it, and adding more CPUs means using unoptimized hardware for something that can be done faster on optimized hardware.

Considering PCs are general-purpose machines, adding separate CPUs to do specific things is not economical, except for video cards. You can make a crazy-fast computer, but then all it can do is one thing, like the Deep Crack machine (link at the bottom).

The PS2 has four CPUs inside it (and a GPU): the main R5900, two coprocessors (for 3D math), and the old PS1 R3000A (for input and old games). The CPU is only 400 MHz, but still fast as heck. All this Cell stuff is just marketing hype, but still cool.

PCs have been pretty much stuck with the same architecture since the 1980s. Game consoles are specialized, but most of their games are over-commercialized T-rated movie spinoffs made by people who just want money and are controlled by Sony/MS/Nintendo.

http://www.eff.org/descracker/

Agreed. I don't think we could ever have a CPU that was fast enough; there are always more AI/physics/effects you can add to your game engine. I would definitely like to see it get to the point where we could have real-time ray-traced scenes; for that you need massive CPU power (a GPU can't really help with that).

I don't think merging a CPU and GPU into one block would be a good idea either. One major factor is cost: if you wanted to upgrade your CPU you would also have to upgrade your GPU (or vice versa), and this would not be cost effective. Also, during normal usage (i.e., not gaming), wouldn't this portion of the CPU be idle? The shader pipelines are way too specific to be used for anything else, and making them generic would kill performance, unless we see something like a 10x increase in CPU performance.

I would definitely like to see something like the Cell technology, where all of your components (GPU/CPU/IO) are blocks in a big grid and all communicate via the same high-speed bus. That way your computer is extremely modular and configurable. Want another processor? No problem. Another GPU for the weekend's fragfest? Easy, just drop it in and go... Hell, even RAM itself could be a 'Cell'. A typical PC might contain, say, 6 Cells (CPU/GPU/memory/IO/2 free).

-=-=-=-=-=-=-=-=-=-=-=-=-=-
Mythical Masterpieces
http://www.leviathan3d.com [Site Under Construction]


I have to disagree. I think games only need a finite amount of computing power. Graphics-wise, certainly: once it looks as good as the newest Star Wars / Monsters, Inc. type stuff in real time, that's pretty much perfect.
For other aspects it's less certain: an ideal RTS/management game might want to simulate the whole world, which would take a LOT of power, but still a finite amount. AI is very hungry if you want a good neural net, but simulating a few million neurons is still a finite drain on resources.

I'm not saying we're anywhere near a realistic limit (photorealism, realistic soft-body deformable worlds), but one day, when we have maybe 1000x (100,000x?) the power, we'll be getting close!



Read about my game, project #1
NEW (18th December): 2 new screenshots, one from the engine and one from the level editor



John 3:16

We don't actually need the CPU to do the task; what we need is a GPU/VPU that can work together with the CPU.

Today's GPUs have their own RAM, they sit on an AGP bus, and they can hardly send data back. This has to change...
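
A minimal OpenGL sketch of that readback bottleneck (it assumes an existing GL context; glReadPixels is the standard call for pulling the framebuffer back across the bus, and it stalls the pipeline while the data crawls over AGP):

#include <GL/gl.h>

// Copy the current framebuffer back into system memory.
// The CPU sits idle while the transfer trickles across the bus.
void readBackFrame(int width, int height, unsigned char* pixels)
{
    glReadPixels(0, 0, width, height,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}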

So I think a GPU directly integrated into the system (like in a dual-processor system, where one of the processors would be the GPU) is the way to go.

Hyper-Threading shows the way CPUs will go, and AMD's Hammer does too (up to 16 on one board, or so)...

It wouldn't hurt if one of those multiple processors were an "ultra-fast VPU" which could be used for vertex processing, pixel processing, physics math, video stream processing, audio processing, or whatever. (Okay, we would need several of those processors then, hehe :D, but you get the idea. :D)

Get the stuff closer together...

"take a look around" - limp bizkit
www.google.com

Well... evolution is the ultimate systems engineer, and evolution teaches us that the more generalised a system's capabilities are, the more survivable it is. However, it also teaches us that to maximise capabilities, a system needs specialised technologies. If you take humans to be at the top of the ladder: we are the ultimate in generalised capabilities because there is practically nothing we can't do, but we also contain a number of highly specialised organs that enable us to do the things we do. We don't see with our tongues, and we don't digest food with our ears.

So... since all science is basically a process of idea evolution, I would have to say that I think GPUs (or versions thereof) will be around for a long, long time. The 'Cell' idea has built-in limitations. At this level it surpasses existing limits and so it will do well, but once its limits are reached, it will be forced to start outsourcing to specialist units again.

Using multi-functional CPUs to do everything is designing a computer along the same principle as a nervous system, where everything is basically just neurons (et al.). But a computer isn't a model of a brain; it's a model of an organism, and for it to be successful, it must diversify and specialise its functional areas.

The GPU is an evolution of the CPU, in the same way that an eye is an evolution of a chloroplast (although perhaps not quite so drastic a difference). It won't devolve from there.

And with that said, vive late nights and too much coffee!

The problem I see with lots of specialist processors is that different machines will have not just different-speed chips but different specialist parts; optimising, etc., is going to be a pain!



Read about my game, project #1
NEW (18th December): 2 new screenshots, one from the engine and one from the level editor



John 3:16
