samoth

Posted 29 December 2012 - 10:38 AM

> Productivity, not language performance, is the key feature.

> No, this is not accurate. It depends on the application domain. For some projects performance is absolutely key; for others, not so much.
Unfortunately, it is exactly true. I don't like it any more than you do, but it is true. The biggest advantage of C++ over C (and of C#/Java over C++) is that you can hire a mediocre programmer to do the same thing a very expensive, highly skilled programmer would otherwise do, in half to two-thirds of the time.
Don't get me wrong: I'm not saying that C# programmers as such are inferior in any way. What I am saying is that someone of considerably lower skill using C# can outperform someone of higher skill using C++ in time and cost (replace C# with Java if you like). C# and Java come with huge standard libraries that are not only very complete but also very easy to grok. Plus, automatic memory management.
That means a programmer needs a lot less skill (and less time) to produce "something that works". Maybe not the best possible thing (that still requires someone with skill!), but something that works.
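
To make the effort gap concrete, here is a minimal sketch (my own illustration; the function name is invented): what C#'s File.ReadAllText(path) does in one line takes manual allocation, error handling, and cleanup in C.

```c
/* A minimal sketch of the effort gap: reading a whole file into a string,
   a one-liner in C# or Java, done by hand in C. */
#include <stdio.h>
#include <stdlib.h>

/* Returns a heap-allocated, NUL-terminated copy of the file's contents,
   or NULL on failure. The caller is responsible for free()ing it. */
char *read_all_text(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;

    if (fseek(f, 0, SEEK_END) != 0) { fclose(f); return NULL; }
    long size = ftell(f);
    if (size < 0) { fclose(f); return NULL; }
    rewind(f);

    char *buf = malloc((size_t)size + 1);
    if (!buf) { fclose(f); return NULL; }

    size_t got = fread(buf, 1, (size_t)size, f);
    fclose(f);
    buf[got] = '\0';    /* NUL-terminate whatever was actually read */
    return buf;
}
```

Every line of that is an opportunity for the less skilled programmer to leak memory or skip an error check; the one-liner removes all of those opportunities at once.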

A lot of browser games are of embarrassingly poor quality and consume embarrassing amounts of resources to deliver comparatively little. Who cares?
It takes a moderately skilled team three weeks to puke out something that sells. On the other hand, it takes a highly skilled team three years to produce something really good that also sells, but only three years later, after all competitors have already sold theirs. From a business perspective, which one is better?

Quality and performance do not matter as much as you think. As long as it sells, all is good. Did you ever wonder why every incarnation of [insert any software title] gets more bloated and slower without adding real value?

A WYSIWYG word processor / DTP program used to fit on a floppy disk and run on an 8 MHz processor with 512 kB of RAM in the mid-1980s. It was written in C, on an operating system written in C, by the way. Computers at that time were entirely capable of performing well with C.

A program that does exactly the same today (apart from greatly improved, but still nowhere near perfect, spellchecking) runs on a computer with about 3,000 times the CPU power and about 8,000 times the main memory. And it doesn't truly work "better" or faster in any observable way.
Such a program typically has a working set upwards of 100 MiB just for showing an empty window, reserves upwards of 300 MiB of address space, and takes anywhere from 300 to 900 MiB on your hard disk.

So what is the conclusion? Software companies deliberately produce bad software to force people into buying bigger and more expensive computers? Of course not.

It is just much, much better for business. As long as people keep buying, you're only reducing profit by doing better. The good horse only jumps as high as it needs to. It isn't worth hiring a team of highly skilled people for something a low-wage guy can do, even if that means it's 30% slower (as long as people still buy).
> Moore's law [...]
Moore's law was initially a ten-year extrapolation of an observation made by an Intel founder, based on (questionable) data. It turned out, however, to be a very clever marketing strategy that has been followed ever since, and that is all Moore's "Law" really is: marketing.

C and C++ were very affordable on 15-, 20-, or 30-year-old hardware, even with the compilers of that time. A lot of very serious, good programs on the Atari ST and Amiga were written in GFA BASIC, which offered both a bytecode interpreter and a compiler. The performance of the GFA BASIC compiler was entirely sufficient for 99% of anything you'd ever want to write at that time.

All software running on the BeBox in the mid-90s was written with the Metrowerks C++ compiler (initially you had to cross-compile from a Mac, what a joy!). Compared to today's compilers, MW C++ was embarrassingly poor. However, this was never an issue. Comparing my old dual-CPU 66 MHz BeBox to my modern 4-core 2.8 GHz Windows system, I see no substantial improvement in the "general feel" of most programs.
> C is still basically the same language as it was in the 80s.
Well, yes and no. It is of course "basically" the same language, but that is true for C++ or Java too.

C has, over the years, gone a long way toward making many things easier, more explicit and efficient, less ambiguous, and safer (headers like inttypes.h/stdint.h, restrict pointers, threading support, bounds checking, alignment control, static assertions). In a way, if you compare C11 to, say, C89 or C90, or to K&R's original invention, it is "some completely different language".
The same is true of C++ (and probably of Java; I wouldn't know, I haven't used Java since around 2003).
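
For illustration, here is a minimal sketch (my own, not from the thread) of a few of the post-C89 additions named above; it should compile with any C11 compiler, e.g. cc -std=c11 -c c11_sketch.c.

```c
#include <stddef.h>   /* size_t */
#include <stdint.h>   /* fixed-width integer types, C99 */
#include <assert.h>   /* static_assert macro, C11 */
#include <stdalign.h> /* alignas / alignof, C11 */

/* Static assertion: checked at compile time, zero runtime cost. */
static_assert(sizeof(int32_t) == 4, "int32_t must be exactly 4 bytes");

/* Alignment control, e.g. for SIMD-friendly buffers. */
alignas(16) uint8_t scratch[64];

/* restrict (C99): promises dst and src never overlap, so the compiler
   may reorder and vectorize the loads and stores more aggressively. */
void scale(float *restrict dst, const float *restrict src, size_t n, float k)
{
    for (size_t i = 0; i < n; ++i)
        dst[i] = src[i] * k;
}
```

None of this existed in K&R or C89; each item removes a class of ambiguity or manual bookkeeping.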
> ...it will become very clear to you that there is no possible way that a JIT compiled VM language (produced by your C# compiler) can be faster than native machine code
This is a very obvious truth, which should be clear even without reading academic papers.

JIT-compiled code may, in some situations and depending on the programmer's skill, perform better. A poor C# programmer may easily outperform a poor C programmer, simply because the C# standard library is well optimized and a poor C programmer might not be able to implement a competitive algorithm properly. However, the same is not true when comparing skilled programmers.
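
To illustrate the library point with a hypothetical C example of my own: the standard library already ships a competitive algorithm, so even a novice gets O(log n) lookup on sorted data from bsearch(), while the hand-rolled alternative a beginner tends to write is a plain linear scan.

```c
/* The library hands you a tested, optimized algorithm for free. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids overflow of x - y */
}

int main(void)
{
    int sorted[] = {2, 3, 5, 7, 11, 13, 17, 19};
    size_t n = sizeof sorted / sizeof sorted[0];
    int key = 13;

    /* Naive approach: linear scan, ignores the fact that data is sorted. */
    int found_linear = 0;
    for (size_t i = 0; i < n; ++i)
        if (sorted[i] == key) { found_linear = 1; break; }

    /* Library approach: O(log n) binary search, already written and tested. */
    int *found = bsearch(&key, sorted, n, sizeof sorted[0], cmp_int);

    printf("linear scan: %d, bsearch: %s\n", found_linear, found ? "hit" : "miss");
    return 0;
}
```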

In the end, anything that comes out of a JIT compiler is executed as native machine code, so assuming comparable input (i.e., an equally skilled programmer) it can only ever be equally fast, never faster. However, unlike a normal compiler, a JIT compiler operates under a very hefty constraint: it has to run in "almost realtime". The end user expects something to happen more or less instantly when launching a program. Nobody wants to wait a minute or two. Or ten. Caching does help, but only to some extent.

A normal optimizing compiler runs offline on the developer's machine, and that happens just once. It does not matter much whether a release build takes 15 seconds or 45 minutes or 4 hours (build times for non-release builds are a different story). It also doesn't really matter whether compiling takes 2 or 6 or 10 gigabytes of RAM, because the developer's machine will have that much -- the end user doesn't care.

Therefore, an offline compiler has a lot of opportunities and a lot of freedom in what it can do that a JIT simply cannot afford. With that in mind, a JIT can, in general, not be faster than a normal compiler either. It just isn't realistic, no matter how clever the JIT gets.
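
As a small illustration of what that time budget buys (my example; exact behavior depends on compiler and flags): at -O2, GCC and Clang typically evaluate the following loop entirely at compile time and reduce the function body to returning the constant 4950, the kind of analysis a JIT under "almost realtime" pressure may have to skip or cut short.

```c
/* An offline optimizer will usually constant-fold this whole loop,
   so no loop exists at runtime at all. */
int sum_to_100(void)
{
    int s = 0;
    for (int i = 0; i < 100; ++i)
        s += i;
    return s;
}
```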

Think of playing chess against Anatoly Karpov, except Karpov only has 2 seconds for every move, and is allowed to look at only half of the board. You, on the other hand, can take any amount of time you like, use a chess computer, and may consult any amount of experts you want. He may be the best chess player in the world, but it is astronomically unlikely that he will win.
