Computational Speed Barrier

Recommended Posts

Are there any physically unrealizable speeds for a computer (or for computation in general)? [Edited by - arithma on April 19, 2006 9:45:18 AM]

A computer that could simulate itself at faster-than-realtime speed would be a paradox.
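To make the paradox concrete, here is a toy calculation (the speedup factor and nesting depth are made-up numbers): if a machine could simulate itself at s > 1 times realtime, running the simulation inside the simulation compounds that speedup without bound, which finite hardware cannot deliver.

```python
# Toy illustration of the self-simulation paradox (s and the nesting
# depth are made-up numbers): a machine that simulates itself at
# s > 1 times realtime could be nested inside itself, compounding
# the speedup without bound -- impossible on finite hardware.

s = 2.0  # hypothetical speedup of the self-simulation over realtime

for depth in range(1, 11):
    effective = s ** depth  # 'depth' levels of simulation-inside-simulation
    print(f"nesting depth {depth:2d}: {effective:8.0f}x realtime")
```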

The speed of light is the limit for the speed of mass.
The speed of a computer is nothing but the frequency of its clock, but that clock frequency is constrained by the critical path through the hardware's functional units.
I believe this is much more specific.
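As a rough sketch of what "critical path" means here (all gate delays below are made-up illustrative numbers): the clock period can be no shorter than the slowest register-to-register path through the logic, so the maximum clock frequency is its reciprocal.

```python
# Minimal sketch: the clock period must cover the slowest
# register-to-register path (the critical path), so f_max = 1 / t_critical.
# All gate delays are made-up illustrative numbers.

gate_delays_ps = {"xor": 45, "carry": 25, "mux": 40}

# Hypothetical critical path through one stage of a small adder:
critical_path = ["xor", "carry", "carry", "carry", "carry", "mux"]

t_critical_ps = sum(gate_delays_ps[g] for g in critical_path)  # 185 ps
f_max_ghz = 1000.0 / t_critical_ps  # 1/ps is THz, so x1000 gives GHz

print(f"critical path delay: {t_critical_ps} ps")
print(f"maximum clock frequency: {f_max_ghz:.2f} GHz")  # ~5.41 GHz
```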

The speed of light is the limit for all things. Mass can't reach the speed of light (though in theory it can reach any fraction below 100%), but energy can. Electrical signals travel at most at the speed of light, which puts a limit on the frequency you can obtain. Much more serious limits exist, though, such as capacitance in the wires causing signal degradation. Once you get down to a certain scale, quantum mechanics starts to play a role, and before that, the magnetic fields induced by the rapidly fluctuating signals will do all kinds of nice things to them.
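To put a number on the frequency limit: in one clock cycle a signal can travel at most c/f, and real on-chip signals are considerably slower than that. A rough sketch:

```python
# Upper bound on how far a signal can travel in one clock cycle: c / f.
# Real on-chip wires are much slower (RC-limited), so this is optimistic.

C = 299_792_458.0  # speed of light, m/s

for f_hz, label in [(1e9, "1 GHz"), (3e9, "3 GHz"), (100e9, "100 GHz"), (1e12, "1 THz")]:
    reach_mm = C / f_hz * 1000.0  # metres per cycle -> millimetres
    print(f"{label:>8}: at most {reach_mm:8.2f} mm per clock cycle")

# At 100 GHz the whole one-cycle budget is ~3 mm -- comparable to the
# die itself, which is why clock distribution becomes a serious problem.
```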

More important than CPU speed, though, is system speed. A 10^9 GHz CPU clock isn't much good if the RAM is still crawling along at a few hundred megahertz. It's difficult to make the bus faster because it must carry signals a very long distance (compared to inside a single chip), and that gives capacitance, signal reflections, etc. more room to play a large role.
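A back-of-the-envelope illustration (all figures are made up, including the hypothetical 100 GHz core): with a fixed memory latency, even a small cache-miss rate leaves a fast core idle most of the time.

```python
# Back-of-the-envelope memory-wall model (all numbers are made-up
# illustrations): even a very fast core spends most of its time
# waiting if every cache miss costs a fixed wall-clock latency.

cpu_hz = 100e9          # hypothetical 100 GHz core
mem_latency_s = 50e-9   # hypothetical 50 ns memory access latency
miss_rate = 0.02        # fraction of instructions that miss cache

cycles_per_miss = mem_latency_s * cpu_hz           # stall cycles per miss
avg_cycles_per_instr = 1 + miss_rate * cycles_per_miss
effective_hz = cpu_hz / avg_cycles_per_instr

print(f"stall per miss: {cycles_per_miss:.0f} cycles")
print(f"effective rate: {effective_hz / 1e9:.1f} G instructions/s")
# 50 ns * 100 GHz = 5000 cycles per miss; with a 2% miss rate the
# 100 GHz core retires only ~1 G instructions/s -- memory dominates.
```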

Supposedly, there is a company that expects to build just such a computer (PHz) using optics instead of electronics, which would solve some of the problems (interference is easier to prevent, etc.), but there hasn't been enough work in that field yet to really know how it will pan out, AFAIK. Even if it does, light speed is still an upper bound (as far as is known).

Quote:
Original post by arithma
The speed of light is the limit for the speed of mass.
The speed of a computer is nothing but the frequency of its clock, but that clock frequency is constrained by the critical path through the hardware's functional units.
I believe this is much more specific.


Not really... Are you after the highest possible clock rate? The highest possible CPU performance? System performance? Or something else? What is "computational speed", exactly?

"Speed" is not really the proper term. The most obvious example today is Intel vs. AMD processors, which achieve similar performance but operate at completely different clock speeds.

This may sound like nit-picking, but it is the crucial question here: how do you measure the "speed" of a computer?

The usual benchmarks applicable to many cases count instructions or operations per second. Even here, a variety of competing metrics exists (FLOPS, MIPS).
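For illustration, here is what a naive "operations per second" measurement looks like; Python's interpreter overhead dominates, so the number says more about the benchmark than about the hardware, which is exactly the point.

```python
# Naive "operations per second" micro-benchmark. The interpreter
# overhead dominates, so this measures the benchmark, not the silicon --
# which is exactly why FLOPS/MIPS comparisons need so many caveats.

import time

N = 1_000_000
x = 1.0

start = time.perf_counter()
for _ in range(N):
    x = x * 1.0000001 + 1e-7  # one multiply and one add per iteration
elapsed = time.perf_counter() - start

print(f"{2 * N / elapsed / 1e6:.1f} MFLOPS (as measured by this loop)")
```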

But this poses an even bigger problem. Internet-connected computers can be viewed as one big computer; if you pooled all of that computational power, you'd quickly break any record.

So the real question is not what the limits are, but what performance means.

At the pure chip level, the limits of physics have long since been encountered. Photolithography has to use ever higher frequencies (shorter wavelengths) to etch smaller features. It is not uncommon for chips to dissipate over 100 W. Increased clock frequencies limit the distance between units because of the speed of light. Pathways are packed so closely together that quantum uncertainty effects are being noticed.

This is why multiple cores are increasingly being used, and parallel computing is becoming mainstream. "Speed" remains the same, but performance increases x-fold.
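One caveat on "x-fold", which Amdahl's law makes precise: the serial fraction of a program caps the achievable speedup no matter how many cores you add. A quick sketch (the 5% serial fraction is illustrative):

```python
# Amdahl's law: with a serial fraction s of the work, n cores give a
# speedup of 1 / (s + (1 - s) / n), which saturates at 1 / s.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

serial_fraction = 0.05  # illustrative: 5% of the program cannot be parallelized

for cores in (1, 2, 4, 8, 64, 1024):
    print(f"{cores:5d} cores -> {amdahl_speedup(serial_fraction, cores):6.2f}x")

# Even with unlimited cores the speedup never exceeds 1/0.05 = 20x,
# so doubling cores is not the same as doubling "speed".
```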

These are just technical obstacles, and the manner in which they are solved depends only on business viability.

While this is merely a theoretical debate, one paper (http://arxiv.org/abs/astro-ph/0404510) argues that Moore's law can hold for at most about 600 years (so, about 550 more) before the universe itself limits the growth of information processing.
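The arithmetic behind a figure like that is easy to check: at one doubling every 18 months, 600 years is 400 doublings, a growth factor of about 10^120, which is the scale of proposed cosmological bounds on the total number of operations the observable universe could ever perform.

```python
# Sanity check on the ~600-year figure: with Moore's-law doubling every
# 18 months, 600 years is 400 doublings -- a growth factor of 2^400,
# on the order of 10^120, the scale of proposed cosmological bounds on
# the total number of operations the observable universe can perform.

import math

doubling_period_years = 1.5
horizon_years = 600

doublings = horizon_years / doubling_period_years
growth_factor_log10 = doublings * math.log10(2)

print(f"doublings in {horizon_years} years: {doublings:.0f}")
print(f"growth factor: 10^{growth_factor_log10:.0f}")  # ~10^120
```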

I think he means: in the context of his application (a game or whatever), when will we reach a limit where we cannot get any more performance/speed out of a computer?

Is there an upper bound for the average desktop, such that it can execute a certain loop, say, 100 times a second and no faster?

According to Moore's Law, "our rate of technological development, the complexity of an integrated circuit, with respect to minimum component cost, will double in about 18 months."

Since Gordon Moore's observation in 1965, people have been saying that we must hit a plateau at some point where we just can't get any faster, but it has never come to pass. Even if we reach a clock speed where the signals between transistors just can't communicate any faster, there are still many options beyond modern CPU architecture. We're already using hyper-threading and dual-core processors. A light-based CPU prototype has already been built and is expected to reach incredible speeds. Another concept (although I don't know how far along we are on it) is quantum computers, which should allow instantaneous processing.
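For scale (illustrative arithmetic only): an 18-month doubling time compounds to roughly two orders of magnitude per decade.

```python
# Illustrative arithmetic only: what an 18-month doubling time compounds to.

doubling_period_years = 1.5

for years in (5, 10, 20):
    factor = 2 ** (years / doubling_period_years)
    print(f"after {years:2d} years: ~{factor:,.0f}x")  # 10 years -> ~100x
```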

IMO, there's no need to worry about hitting a computational ceiling. I'm pretty confident that, as long as there is technology, we'll find some way to improve it.

Quote:
Original post by dawidjoubert
I think he means: in the context of his application (a game or whatever), when will we reach a limit where we cannot get any more performance/speed out of a computer?

Is there an upper bound for the average desktop, such that it can execute a certain loop, say, 100 times a second and no faster?


This limit has been reached several times in game development.

First, games were limited to terminal/text graphics - then came graphics cards.
Simple graphics then worked, but were slow - then came VESA/Mode X.
People started dabbling in 3D, but image quality was poor - then came 3D accelerators.
With real-time polygon pushing no longer a problem, people improved the art - then came shaders.
With photorealism just a matter of implementation, people are improving in-game worlds - physics accelerators are coming out these days.

It is extremely easy to reach the cap, since each unit performs only a very narrow task. And once expanding beyond simple emulation becomes viable, specialized hardware appears.

There is no upper bound in the foreseeable future. The only thing that would create one is the market, not technical limitations.

Quote:
Original post by coderx75
Another concept (although I don't know how far along we are on it) is quantum computers, which should allow instantaneous processing.


Quantum computing isn't exactly faster than traditional computing. Quantum computers can solve certain select problems extremely quickly, but most of the pre- and post-processing for such algorithms has to be done by a traditional computer. I don't think a quantum desktop computer is feasible or particularly useful.
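A concrete instance of "certain select problems": unstructured search. Classically you expect about N/2 queries; Grover's algorithm needs on the order of sqrt(N), a big win but nothing like instantaneous, and many problems get no quantum speedup at all.

```python
# Quantum speedups are problem-specific, not across-the-board.
# Example: unstructured search over N items. Classical expected queries
# ~ N/2; Grover's algorithm needs on the order of sqrt(N) queries.

import math

for n in (1_000, 1_000_000, 1_000_000_000):
    classical = n / 2
    grover = math.pi / 4 * math.sqrt(n)  # standard Grover query count
    print(f"N = {n:>13,}: classical ~{classical:,.0f} queries, "
          f"Grover ~{grover:,.0f} queries")

# Big wins on search (and on factoring, via Shor), but no speedup at all
# for many everyday tasks -- hence "not exactly faster".
```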

Guest Anonymous Poster
To supplement the mentions of optical interconnects, I'll add that Intel is already one of the players in this field; they call their approach "silicon photonics". IIRC, they claim to have already produced a 1 GHz optical CPU, though their engineers predict the tech can be refined to yield a 10 GHz one.
