DevLiquidKnight

Moore's Law

17 posts in this topic

How do you suspect the slowdown of Moore's law will be mitigated, in the event that it does begin to slow? Some information I read online suggests that it may already be slowing and that quantum tunneling presents a large hurdle.

Yes, we have generally reached a limit, which means we won't be able to clock our processors much faster until they are made of something other than silicon. In the absolute worst case, where we are no longer able to develop better technology, Moore's Law will degenerate into a "parallel processing" argument which states that we can double computational power by doubling the number of cores, something we won't be able to keep doing for very long because of spatial constraints (since we can't make the cores any smaller either). And many applications do not scale linearly with the number of computational execution units.
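To put a rough number on that last point, Amdahl's law (my own addition, not something from this thread) bounds the speedup extra cores can buy when part of the work stays serial. A minimal sketch:

#include <cstdio>
#include <initializer_list>

// Amdahl's law: if a fraction p of a program parallelizes perfectly,
// the best possible speedup on n cores is 1 / ((1 - p) + p / n).
double amdahl_speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main()
{
    // Even a "mostly parallel" program (p = 0.9) scales far from linearly:
    // about 4.7x on 8 cores, 6.4x on 16, and never more than 10x total.
    for (int cores : {2, 4, 8, 16, 64})
        std::printf("%2d cores -> %.1fx speedup\n", cores, amdahl_speedup(0.9, cores));
}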

 

Fortunately, "moar power" isn't the only way to make stuff work faster. In fact, there are much more important factors, including memory access patterns, I/O dependencies, SIMD/MIMD, and of course the odd algorithmic improvement. These make a huge difference in the running time of a computationally heavy workload, perhaps more so than just throwing more, faster cores at it. There are also a bunch of specialized tricks hardware manufacturers keep introducing to make common operations work that little bit faster (think horizontal add, fused multiply-add, etc.). I'm sure they have a few tricks up their sleeves for the next few years to compensate.
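As a concrete illustration of one of those specialized instructions (my own sketch, not from the post), here is a fused multiply-add via x86 intrinsics; it assumes an FMA-capable CPU and a compiler flag such as -mfma:

#include <immintrin.h> // AVX/FMA intrinsics
#include <cstdio>

int main()
{
    // r = a * b + c, computed for eight floats in a single fused multiply-add.
    __m256 a = _mm256_set1_ps(2.0f);
    __m256 b = _mm256_set1_ps(3.0f);
    __m256 c = _mm256_set1_ps(1.0f);
    __m256 r = _mm256_fmadd_ps(a, b, c);

    float out[8];
    _mm256_storeu_ps(out, r);
    std::printf("%f\n", out[0]); // prints 7.000000
}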

 

Though I think they are mostly banking on adding more cores to everything while keeping power usage steady. At least GPUs can afford to, because most graphics rendering scales almost perfectly; CPUs don't have that luxury and can only do so much to speed up badly designed single-threaded programs. There will be a gradual paradigm shift to parallel computing; we may even integrate GPU technology into our current CPUs to create a unified general-purpose parallel processor, and our development patterns will adapt accordingly. Perhaps "cloud computing" will really take off, perhaps it won't. This will take a long time: don't expect to see any of this for at least eight years, and perhaps much longer.

 

But, in the limit, yes, from our current understanding of physics, Moore's Law will simply cease to apply until our technology is built from something other than matter. You can only double anything so many times before it becomes too large to handle (ever heard of the rice and chessboard anecdote?)

 

In any case, consumer CPUs do not need more power right at the moment; they are more than adequate. What really needs to be addressed is not the hardware but the software.

"Moore's Law" annoys me, mostly because when it's used by anyone (and, tbh, gamers are the worst for this) they seem to give it the same weight as the law of gravity in a 'this must happen!' sense... It should have been called 'Moore's observation of the past and projection for the future'... *grumbles*

"Moore's Law" annoys me, mostly because when it's used by anyone (and, tbh, gamers are the worst for this) they seem to give it the same weight as the law of gravity in a 'this must happen!' sense... It should have been called 'Moore's observation of the past and projection for the future'... *grumbles*

 

This. Years ago, when people first started talking about it, my first thought was "bullshit." To the best of my remembrance, I've never used the term in any conversation, simply because I never believed it. I'm not any kind of genius prognosticator, but it doesn't really require one to be a genius to intuit that there will be hard limits on scaling and speed that Moore's so-called Law can't circumvent.

It's not like Moore himself said that it would continue indefinitely. He wrote a paper in the 1960s saying transistor densities were doubling every two years and said the trend would continue for at least ten years. We hit the expiration date on his prediction around thirty years ago.

I found this article interesting - the author argues that Moore's Law has already started slowing down, but that we can (and do, and inevitably will) take advantage of similar trends to keep pace in other ways. I don't have the requisite knowledge to critically examine the article - hardware ain't my field - so I'd be interested to hear what others think of it.

It seems that everything will continue normally for at least 8 years, since 5 nm is on Intel's roadmap. It's hard to imagine anything much smaller than 5 nm being possible with technologies whose core component is photolithography...

IBM has been doing research on stacking multiple dies and interconnecting them vertically with thousands of interconnects per square millimeter, with special cooling micro-channels taking care of the generated heat. With such an approach it might be possible to keep increasing computational capability and probably also drive costs down by not replacing the manufacturing equipment as often.

Meanwhile, there is research in nanoelectronics, namely in methods to include trillions of transistors in a single chip. This paper (http://www.ee.washington.edu/faculty/hauck/publications/NanoSurvey.pdf) suggests that these trillions of devices will probably have to have a regular layout on the chip and that a large number of defective devices per chip will be the norm, so a method will be necessary to avoid using these defective devices.

Bacterius mentioned architectural and algorithmic improvements. True, a good choice of algorithm may make a program execute thousands of times faster, and the choice of the correct architecture might speed a program up by some factor, but currently we have the luxury of BOTH architectural/algorithmic improvements and a nice 2x speedup every 20-24 months. If manufacturing technology had stopped evolving in 2007, we still wouldn't be able to play Crysis 1 on a cheap gaming PC!
Edited by D_Tr

"Moore's Law" annoys me, mostly because when it's used by anyone (and, tbh, gamers are the worst for this) they seem to give it the same weight as the law of gravity in a 'this must happen!' sense... It should have been called 'Moore's observation of the past and projection for the future'... *grumbles*

A "law" is an observation though - a simple relation of observed behaviour. Not all scientific laws have the same weight as that of gravitation either; the gas laws, for example, are just approximations. Perhaps it's unfortunate that people misunderstand what a "law" is (the same goes for "theory"), but that applies to all usages of the term.

Since everything must be portable and wireless and green nowadays (bleh), it's likely that Moore's Law (which is purely a marketing gag) will be replaced by some other bullshit, such as "CPUs will consume 30% less power every year." CPUs with integrated GPUs are another such marketing gag. Some nonsense will be thought of to give you an incentive to buy a new processor, rest assured. Maybe the new SSE6.3 instructions, which are only supported on 0.5% of all CPUs, are useful for 0.01% of all programs, and have different opcodes on Intel and AMD. And of course there will be support for SHA-3 instructions.

GPU manufacturers are going the "low power" route already, and in this case I somewhat approve (having to buy a bigger power supply because the GPU alone draws 300-400 W is simply not acceptable). With Kepler, nVidia, which probably has the longest history of ultra power-hungry hardware on the planet, for the first time brought out a new chip with a lot more silicon and only marginally better performance, but with half the power consumption.

When CPU speeds are maxed out and more speed is desired, there is still memory bandwidth to be dealt with. I'm sure it is possible to make memory faster; there just hasn't been much interest in that in the past (because L2 cache works reasonably well). However, faster RAM obviously makes a faster computer, too.

If Moore's law slowed to a halt right now, but we still wanted faster, smaller PCs, then we'd just have to throw out all our software and start again -- this time teaching everyone multi-core practices from the get-go, so that we could put 100 '90s-era CPU cores onto a chip and actually have them be utilized (which was the idea behind Larrabee).

Hardware manufacturing is pretty foreign to me, but AFAIK chip design is still mostly a 2D affair -- Moore's law is about surface area, not volume. So, once we hit an area limit, it's time to start stacking vertically, and then racing to reduce the thickness of those layers too... ;|

Power usage, as mentioned above, is another factor that needs to be addressed. Moore's law has been driving this both up and down -- down when things are shrunk, then back up when more of the shrunken things are crammed into each chip. Biological computers (us) absolutely put silicon to shame when it comes to power usage, so there's a lot of work that needs to continue here after Moore's law fails.

Moore only said that transistor counts double every 2-3 years; over the years it's become synonymous with "processing power doubles every three years", which is really what the consumer cares about anyhow. There's a relationship between performance and number of transistors, sure, but no one cares about how many transistors their CPU sports.

 

To continue the single-threaded performance curve, I suspect that chip manufacturers will find ways to get around the slowing of Moore's law. As we approach the limit of transistor shrinks on current technology, we'll just switch to new materials and processes, make bigger and bigger dies, bigger and bigger wafers, and move into '3D' chips -- stacking 2D silicon dies one on top of the other. There are still hurdles there today, but they're far from insurmountable.

 

I also think that the transition to predominantly-multi-threaded, asynchronous programming can happen quite gradually. As those practices become more mainstream, a chip manufacturer can build future CPUs out of cores that are, for example, only half as fast, but consume only 25% of the area of previous cores -- thus, aggregate performance doubles without affecting area. As apps take advantage of this new way of doing things, they'll still seem twice as fast. That said, some problems are necessarily sequential, and so there will probably always be a place for high single-threaded performance. In the future, I can foresee a CPU that has 1-2 'big' cores, and 8-16 (or more) smaller cores -- all functionally identical, but the big ones chasing highest-possible IPC, and the little ones chasing smallest reasonable surface area. It would be best for OSes to know about these different kinds of CPUs, but you can even handle scheduling to the appropriate kind of CPU at a silicon level, and dynamically move between kinds of cores.

 

Another angle is stream processing (GPGPU et al.), which accounts for a significant portion of our heaviest workloads. There, they've already figured out how to spread work across thousands of execution units. If your problem can be solved in that computing model, we're already at a point where we can just throw more area at the problem, or spread it across multiple chips trivially.

 

The single-threaded performance wall is a hardware issue. The multi-threaded performance wall is a wetware (that's us) issue.


we'd just have to throw out all our software and start again -- this time teaching everyone multi-core practices from the get-go, so that we could put 100 '90s-era CPU cores onto a chip and actually have them be utilized (which was the idea behind Larrabee)

The thing is, while the idea of throwing all the software away isn't workable, the fact is that "we" really should be rethinking the way software engineering is taught.

There needs to be a push away from the 'OOP for everything!' mindset and towards one which highlights where OOP shines, while also exposing people to functional programming styles and teaching them to think about data too. That way people have a better understanding of the various ways problems can be solved, and we don't get people trying to fill GPUs with C++ virtual monsters which burn resources doing vtable lookups they really don't need.

I admit I've been out of educational circles for a while, but if they are still turning out 'OOP is the bestestest!' "programmers" every year, we have no chance of overcoming the problem.

So should more computer science majors focus on multi-core/multi-processor programming, if that is the primary way things will be heading in the next 20-40 years? Things such as distributed systems and concurrency control/parallel computation.


The thing is, while the idea of throwing all the software away isn't workable, the fact is that "we" really should be rethinking the way software engineering is taught.

There needs to be a push away from the 'OOP for everything!' mindset and towards one which highlights where OOP shines, while also exposing people to functional programming styles and teaching them to think about data too. That way people have a better understanding of the various ways problems can be solved, and we don't get people trying to fill GPUs with C++ virtual monsters which burn resources doing vtable lookups they really don't need.

I admit I've been out of educational circles for a while, but if they are still turning out 'OOP is the bestestest!' "programmers" every year, we have no chance of overcoming the problem.

I can back this. More and more frequently, OOP isn't the optimal way to go about things, and being stuck in an OOP mindset can be very damaging in that regard.
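As a minimal sketch of that contrast (my own example, with made-up Particle types, not anything from the thread): per-object virtual dispatch versus a flat loop over plain arrays that a compiler can realistically vectorize.

#include <cstddef>
#include <vector>

// "OOP" style: one virtual call (and likely a cache miss) per object.
struct Particle
{
    float x = 0, vx = 0;
    virtual void Update(float dt) { x += vx * dt; }
    virtual ~Particle() = default;
};

void UpdateOop(std::vector<Particle*>& particles, float dt)
{
    for (Particle* p : particles)
        p->Update(dt); // vtable lookup on every element
}

// Data-oriented style: contiguous arrays, no indirection; the compiler
// is free to unroll and vectorize this loop.
struct Particles
{
    std::vector<float> x, vx;
};

void UpdateData(Particles& p, float dt)
{
    for (std::size_t i = 0; i < p.x.size(); ++i)
        p.x[i] += p.vx[i] * dt;
}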


I think we should just throw some more information at the compilers so they can do the threading...

 

That means telling it which calls to third-party code are dependent/thread-safe, and defining the different tools needed for threading (similar to how you can override new).

 

Of course, the programmer will always have more information than the compiler, so doing the threading manually might yield performance benefits. But if the compiler does it, it will thread everything without mistakes, if done correctly.

 

I believe there was an Intel compiler (experiment?) for C++ that does just that. Not sure how extensively it could do it.

 

But this probably works well only for our normal general-purpose CPUs with a bunch of cores, because if we go to more fine-grained parallelization, I would imagine that more changes would need to be made at the algorithmic level, which likely isn't an easy job for compilers.
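I'm not sure which Intel experiment that was, but OpenMP is one existing example of "throwing more information at the compiler": a pragma promises that the loop iterations are independent, and the compiler and runtime take care of the threading. A rough sketch (build with something like -fopenmp):

#include <cstdio>
#include <vector>

int main()
{
    const int n = 1 << 20;
    std::vector<double> a(n, 1.0), b(n, 2.0), c(n);

    // The pragma is the extra information: we promise the iterations are
    // independent, and the compiler generates the threading for us.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];

    std::printf("%f\n", c[0]); // prints 3.000000
}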


I think we should just throw some more information at the compilers so they can do the threading...

In theory, this is one of the benefits of functional programming. Since the order in which things happen doesn't matter, the compiler is free to rearrange the operations as it sees fit, including doing multithreading on its own if it feels like it. No idea how well functional languages (at least current ones) cope with this, though.
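Not a functional language, but C++17's parallel algorithms illustrate the same principle: when each element operation is pure and order doesn't matter, the scheduling can be handed to the implementation. A minimal sketch (assumes a standard library with execution-policy support):

#include <algorithm>
#include <execution>
#include <vector>

int main()
{
    std::vector<int> v(1'000'000, 2);

    // Because the lambda has no side effects and touches only its own
    // element, the library is free to spread the work across threads.
    std::transform(std::execution::par_unseq, v.begin(), v.end(), v.begin(),
                   [](int x) { return x * x; });
}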

 

Of course, the programmer will always have more information than the compiler, so doing the threading manually might yield performance benefits. But if the compiler does it, it will thread everything without mistakes, if done correctly.

This used to be the case with code optimization, where a human could generate better assembly than the compiler could. Then over time processors became a lot more complex and compilers became much smarter, and most of the time a compiler is going to beat a human when it comes to optimization since it can see a lot more. I wouldn't be surprised if the same could be applied to making compilers multithread the code.


I think we should just throw some more information at the compilers so they can do the threading...

In theory, this is one of the benefits of functional programming. Since the order in which things happen doesn't matter, the compiler is free to rearrange the operations as it sees fit, including doing multithreading on its own if it feels like it. No idea how well functional languages (at least current ones) cope with this, though.

 

 

Of course, the programmer will always have more information than the compiler, so doing the threading manually might yield performance benefits. But if the compiler does it, it will thread everything without mistakes, if done correctly.

This used to be the case with code optimization, where a human could generate better assembly than the compiler could. Then over time processors became a lot more complex and compilers became much smarter, and most of the time a compiler is going to beat a human when it comes to optimization since it can see a lot more. I wouldn't be surprised if the same could be applied to making compilers multithread the code.

 

I feel like, unless it was JIT-compiled, you would get situations where it gives multiple actions the same weight despite them running much more or less frequently than the compiler expects. The compiler would need more data about the expected use of the program along with the code you give it.


I think we should just throw some more information at the compilers so they can do the threading...

Of course, the programmer will always have more information than the compiler, so doing the threading manually might yield performance benefits. But if the compiler does it, it will thread everything without mistakes, if done correctly.
 
I believe there was an Intel compiler (experiment?) for C++ that does just that. Not sure how extensively it could do it.

Unfortunately, those two bits ("throw some more information at the compilers" and "without mistakes") are at odds with each other -- if we're supplying the extra info, then we can still make mistakes ;)
 
Also, generic C++ isn't conducive to automatic concurrency -- not even on a single thread!
For example:

struct ArrayCopyCommand : public Command
{
  int* from;
  int* to;
  int count;
  virtual void Execute()
  {
    for( int i=0; i!=count; ++i )
      to[i] = from[i];
  }
};

The compiler will already try to pull apart this code into a graph of input -> process -> output chunks (the same graph that's required to generate fine-grained parallel code) in order to generate optimal single-threaded code. Often, the compiler will re-order your code, because a single-threaded CPU may be concurrent internally -- e.g. one instruction may take several cycles before its result is ready, so the compiler wants to move that instruction several cycles ahead of the instructions that depend on that result.

However, the C++ language makes this job very tough.
Take the above code, and use it like this:

ArrayCopyCommand test1, test2;
int data[] = { 1, 2, 3 };

// test1: the destination range overlaps the source (to = from + 1), so each
// iteration reads a value the previous iteration may have just written.
test1.from = data;
test1.to = data+1;
test1.count = 2;
test1.Execute();

// test2: 'to' points at test2.count itself, so the loop body can modify its
// own loop bound.
test2.from = data;
test2.to = &test2.count;
test2.count = 42;
test2.Execute();

In the case of test1, every iteration of the loop may in fact be dependent on the iteration that came before. This means that the compiler can't run that loop in parallel. Even if you had a fantastic compiler that could produce multi-core code, it would have to run that loop in sequence in order to be compliant with the language.

In the case of test2, we can see that the loop body may actually change the value of count! This means that the compiler has to assume that the value of count is dependent on the loop body, and might change after each iteration, meaning it can't cache that value and has to re-load it from memory every iteration, again forcing the code to be sequential.

As is, that ArrayCopyCommand class cannot be made parallel, no matter how smart your compiler is, and any large C++ OOP project is going to be absolutely full of these kinds of road-blocks that stop it from being able to fully take advantage of current/future hardware.
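One sketch of a way out in this particular case (my own example, not part of the original post): promise the compiler that the ranges cannot alias, for instance with the non-standard but widely supported __restrict qualifier, or reach for a routine such as memcpy whose contract already forbids overlap.

#include <cstring> // std::memcpy

// __restrict is a common compiler extension (GCC, Clang, MSVC), not standard
// C++. With the no-alias promise, the compiler may unroll, vectorize, or
// otherwise reorder this loop freely.
void CopyNoAlias(int* __restrict to, const int* __restrict from, int count)
{
    for (int i = 0; i != count; ++i)
        to[i] = from[i];
}

// Alternatively, use a routine whose preconditions already rule out overlap,
// so the library can pick the fastest implementation available.
void CopyMemcpy(int* to, const int* from, int count)
{
    std::memcpy(to, from, count * sizeof(int));
}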

 

To address these issues, it's again up to us programmers to be extremely good at our jobs, and write good code without making simple mistakes...

Or, if we don't feel like being hardcore C++ experts, we can instead use a language that is conducive to parallelism, like a functional language or a stream-processing language. For example, HLSL shaders look very much like C/C++, but they're designed in such a way that they can be run on thousands of cores, with very little room for programmers to create threading errors like race conditions...

Edited by Hodgman

Moore's law will probably continue for a while longer. There have been demonstrations of seven-atom transistors and even a single-atom transistor. Quantum tunneling did present a hurdle, but from what I've read it's seeming like less of one nowadays. I remember when Wikipedia stopped its fabrication process pages at 11 nm, since that was believed to be the smallest they could go. Then, when new research came out, they changed the page to 10 nm; with more research they added 7 nm and then 5 nm. Humans are notoriously bad at predicting the future.

 

As Hodgman said, layering is the newer idea: stacking transistors and building 3D networks. I think a core part of this will be fast interconnects at that scale. Processors are already extremely tiny. Xeon Phi, for instance, increases transistor counts just by moving away from the socket approach. We can easily double the number of transistors in a system by putting two chips on a board; keep doing that and you end up with Xeon Phi's 62 cores and 5 billion transistors per card at 22 nm. Intel is already designing for 10 nm. Once there, it will be two more steps before they're working with a handful of atoms.

 

One thing I think is going to happen, though, in relation to processing speed is a huge drop in the number of simple instructions: a switch to a mostly SIMD architecture with highly CISC instructions for specialized operations (where multiplying one number costs the same as multiplying 16 at the same time). We're already seeing that with encryption and compression algorithms, which have special instructions so that they run far faster than with the basic RISC instruction set. I think at some point we'll start to see hundreds of matrix/quaternion operation instructions, and we'll continue to see speed improvements, possibly doubling for a while, as pipelines are redesigned to speed things up. That, or specialized FPGA-style instructions, since the transistor budget would allow one to program one's own cores on a processor for running calculations or whole algorithms. I digress. I don't think the limit, if and when we reach it in our lifetime, will be a big deal. There are so many avenues to research that we probably won't see it in our lifetime.
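As a concrete illustration of "multiplying 16 at the same time" (my own sketch; it assumes an AVX-512-capable CPU and a flag such as -mavx512f):

#include <immintrin.h>
#include <cstdio>

int main()
{
    // One instruction multiplies sixteen single-precision floats at once.
    __m512 a = _mm512_set1_ps(3.0f);
    __m512 b = _mm512_set1_ps(2.0f);
    __m512 r = _mm512_mul_ps(a, b);

    float out[16];
    _mm512_storeu_ps(out, r);
    std::printf("%f\n", out[15]); // prints 6.000000
}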
 

Edited by Sirisian
