What am I Looking for in a computer

Started by
41 comments, last by Juliean 2 months ago

Btw, to the original poster: the discussion above is completely unrelated to you.
I think @juliean wanted to widen my perspective by discussing the different layers below the C language.



AliAbdulKareem said:
You are making too many assumptions about me here.

This was not specifically targeted at you. It was more a statement of "if someone only knows C++ and then learns C". You might know all the things I talked about, but someone else might not. So the process of learning C will give that person some more insights, but far from the full picture. I brought it up since you mentioned that you tell this to other people.

AliAbdulKareem said:
This should be a C++ thing (I have read the C99 spec, and I don't recall such a thing). And according to cppreference:

Might very well be. My more general point was that the C compiler is allowed to optimize and transform the code as it sees fit, within its restrictions (which are usually dictated by the standard, and generally follow the "as-if" rule). So even if allocation elision is out of the question, it will still be able to eliminate locals, hoist them into registers, and so on; at least I strongly assume so, if it wants to get the best performance.

AliAbdulKareem said:
I worked professionally with Python. I built the source code with optimization and I can confirm it is on the order of 100x to 200x slower by default (this will vary depending on the program, but that is the typical state). Default compiler optimization, like -O3 alone on Clang, will very probably use SIMD and will make the jump to 1000x plausible and not an overstatement (again this will vary, but I have seen it multiple times). I don't consider JIT to be part of Python, because as far as I know (up till the last time I worked with it) the Python installer does not, and never has had, a JIT in it.

Oh yes, 100x slower or something for Python I definitely believe. It might be pedantic, but I see a stark difference between 100x and 1000x slower. But Python, as well as Ruby, Lua, or any other text-interpreted language, is still an extreme outlier; that is my point. If we just look at random comparisons:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/index.html

This seems to generally agree with your numbers, but my point stands: Java being 2 seconds slower here wouldn't really matter. C# being slower by 1.8x is considerable, but also won't matter in a lot of situations. As I said, I don't personally disagree that C/C++ will be the best choices for performance; I'm just putting into perspective whether or not that performance will realistically be of concern to someone. Even something like the 100x of Python might not matter to certain types of programs, though I personally care too much about performance to go that deep. I was mainly aiming at Java, C#, and languages in that range.

AliAbdulKareem said:
I did. But even back at university (EE), the first year was C++, the second year was C, the third year was assembly.

That's actually reasonable. Starting to learn with a higher level, then going into the details, especially academically, does make sense. The university I went to only taught C++, but it was more of a practical institution (a Fachhochschule in German). They recently changed to starting with C# and then moving on to C++. That makes more sense to me than the other way around.

AliAbdulKareem said:
While I have heard such claims, from Chandler's CppCon talks, my limited experience with C# specifically showed me otherwise. For something as simple as copying RGB images, C# (even with fixed and unsafe) doesn't come close to C; we are talking at least an order of magnitude here. (Inside Unity at least, default C# copying was much slower; I ended up coding it in C and loading it as a DLL.) I am not saying you are wrong, but this will probably require more tuning than it's worth.

My own experience with those languages and performance is also limited; that's why I'm relying on the claims of such people. Unity is not the best comparison for C#, btw. Unity has a very specific implementation of C#; I think at one point they even said they didn't do inlining. Not to say that assessing it there is unwarranted - Unity is the largest userbase for C#, after all - but it's not really giving the full picture of the language. Again, I don't know all those things in detail. I just know that Java is used alongside C for things like micro-trading, where every nanosecond matters, so there ought to be some viability for both.
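For reference, the kind of hot loop being compared above might look like this in C (a minimal sketch; `copy_rgb` and its parameters are my own illustrative names, not from any codebase mentioned in the thread):

```c
#include <string.h>
#include <stddef.h>

/* Copy a w x h RGB image row by row. `stride` is the number of bytes per
 * row in both buffers, which may exceed w * 3 due to alignment padding.
 * An optimizing C compiler will usually lower the inner memcpy to wide
 * SIMD moves, which is where the large speedup over a naive
 * byte-by-byte managed loop tends to come from. */
void copy_rgb(unsigned char *dst, const unsigned char *src,
              int w, int h, size_t stride)
{
    for (int y = 0; y < h; ++y)
        memcpy(dst + (size_t)y * stride,
               src + (size_t)y * stride,
               (size_t)w * 3);
}
```

Compiled into a DLL, a function like this can be called from C# via P/Invoke, which matches the workaround described above.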

AliAbdulKareem said:
I am not seeing your point exactly about executing newer instructions; that sounds to me like a trade-off you definitely don't want to make for performance. If it may (or may not) execute faster on newer CPUs, it means it will definitely execute slower on older CPUs (and if your targets are mostly on newer CPUs, just make that the default). Usually you want a guarantee about the standard/worst-case scenario, rather than the maximum/lucky scenario.

I'm talking specifically about JIT. A JIT compiles the code directly on the user's PC. It knows which CPU it is currently running on, so it can select the highest instruction set that is available. Intel has come out with the APX extension on their newest CPUs, which provides things like 64-bit direct-call instructions. If you compile C and want to use those instructions, you have to either make a trade-off, like you mentioned, or specifically force the code to only run on those CPUs. JIT-compiled languages can make that decision at runtime: "I'm running on a 13900KF, let me select the 64-bit call. I'm not? Let's use RIP-relative addressing." This guarantees the highest possible performance on either CPU, without any trade-off. It can generate the best possible code for all CPUs - just think back to when SSE was not yet on all CPUs, whereas now it's standardized. A JIT can, potentially, do the same thing much earlier, before an extension becomes the de-facto standard.

This topic is closed to new replies.
