C++ Common Runtime

Started by
14 comments, last by Bregma 5 years, 8 months ago

I don't agree with that statement either; C++ is far more than just "squeezing the last bit of performance out of a specific platform". I have been coding C++ for a long time now and haven't needed tricks like intrinsics or even inline assembly when writing games and game engines for years, because the code is already well optimized by good compiler backends like LLVM.

The best example here is matrix math. I wrote a 4x4 matrix multiplication in clean C++ without any explicit use of SSE instructions. Performance was poor under MSVC, but the same code even beat the intrinsic version under LLVM, because LLVM is clever enough to detect what the code is doing and optimize it accordingly.
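For reference, a minimal sketch of the kind of plain C++ 4x4 multiply meant here (no intrinsics; the row-major layout and the `Mat4` name are my assumptions, not the poster's actual code):

```cpp
#include <array>

// Plain 4x4 matrix in row-major order. A modern backend like LLVM
// can auto-vectorize this loop nest without explicit SSE intrinsics.
struct Mat4 {
    std::array<float, 16> m{};
    float& at(int r, int c) { return m[r * 4 + c]; }
    float at(int r, int c) const { return m[r * 4 + c]; }
};

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 out;
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a.at(r, k) * b.at(k, c);  // dot of row r with column c
            out.at(r, c) = sum;
        }
    return out;
}
```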

Other reasons I much prefer working in C++ over, say, C# are the total freedom to do anything with your data, and the template model. C++ isn't interested in how your data is formatted: given a range of memory, you can use it as a byte array, an integer array, a struct, or whatever else, while other languages, especially the CLR, complain about storing an Int16 in an Int32. Add the freedom to manage (or waste) your memory as you see fit, the freedom to write static global functions or fully OOP-style classes, and, last but not least, the far more capable template and preprocessor machinery. That is what makes C++ unique in my opinion!
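As a small illustration of that freedom (the function name is mine, purely for illustration): the well-defined way to reuse the storage of one type for another in C++ is `std::memcpy`, e.g. putting an Int16's bytes into an Int32's storage and reading them back, which the CLR's type system would reject.

```cpp
#include <cstdint>
#include <cstring>

// C++ doesn't care how a range of memory is formatted: the same
// bytes can be viewed as a byte array, an integer, or a struct.
std::uint16_t store_and_reload(std::uint16_t value) {
    std::uint32_t storage = 0;                    // "Int32" backing store
    std::memcpy(&storage, &value, sizeof value);  // put an Int16 into it
    std::uint16_t back = 0;
    std::memcpy(&back, &storage, sizeof back);    // view the same bytes again
    return back;
}
```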

To make my point clear: I'm not interested in running 20-year-old software on modern hardware without any tweaks or recompilation; this was a theoretical, general question about how that could become possible in future development.

And yes, I have already thought about something like precompiled, translated code, and (as described above) about keeping the resulting load times in mind. My solution would be whatever modern compilers produce after the front-end step, before it is assembled into architecture-specific code. I don't think such code would run significantly slower than code natively cross-compiled for that architecture, and in cross-platform development you will need preprocessor flags to select platform-specific function calls anyway, so that isn't a real argument against a common runtime :)
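The preprocessor-flag selection mentioned here is the usual compile-time dispatch on predefined platform macros; a minimal sketch (the function name is mine):

```cpp
#include <string>

// Platform-specific behaviour selected at compile time with
// preprocessor flags; _WIN32, __APPLE__ and __linux__ are the
// standard predefined macros of the major toolchains.
std::string platform_name() {
#if defined(_WIN32)
    return "windows";
#elif defined(__APPLE__)
    return "macos";
#elif defined(__linux__)
    return "linux";
#else
    return "unknown";
#endif
}
```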

24 minutes ago, Shaarigan said:

Other reasons I much prefer working in C++ over, say, C# are the total freedom to do anything with your data, and the template model. C++ isn't interested in how your data is formatted: given a range of memory, you can use it as a byte array, an integer array, a struct, or whatever else, while other languages, especially the CLR, complain about storing an Int16 in an Int32. Add the freedom to manage (or waste) your memory as you see fit, the freedom to write static global functions or fully OOP-style classes, and, last but not least, the far more capable template and preprocessor machinery. That is what makes C++ unique in my opinion!

This is pretty much what I was referring to, really. :) While I'm sure there are instances where these things make it easier to express high-level behavior/logic, my impression is that most people use them to ensure certain low-level behavior (i.e., to make a more efficient implementation, tailored to their particular needs at the time).

To get back to the original question, however: even if you can compile your code to some intermediate format (like LLVM apparently supports), you are only solving half the problem. You also need to ensure that certain features exist on the target platform, and that the way you interact with them stays the same. If you take something like .NET, one part of it is the JIT compiler, but do not underestimate the importance of the framework libraries defined for the platform. For a game, this could for example be OpenGL/DirectX, or for older games, some way to acquire a pointer to the frame buffer shown on the screen. And even if you get that pointer, will the image format remain the same? Will the resolutions, etc.? Those are the true problems with cross-platform code, imo, not cross-compiling/JIT-compiling/translating the machine code.

Well yes, one of the big obstacles here is of course the system calls/system libraries. Different OSes have different mechanisms, so you have to abstract this away in a portable runtime.

http://9tawan.net/en/

You have those problems in C#/.NET too, so there are no unexplored landscapes here :)

In C# I usually do platform detection or abstract interface binding, then load the matching classes (some kind of late binding happens in the CLR). So supporting post-compile preprocessor directives (word of the day, really!) or late binding (an extended form of what the OS already does under the hood with dynamic libraries) should be no problem for that kind of IL language. There is always a way to solve it.
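The platform-detection-plus-abstract-interface pattern described here can be sketched in C++ as well; the interface and class names below are hypothetical, chosen only to show the shape of the binding:

```cpp
#include <memory>
#include <string>

// Abstract interface; one implementation per platform is bound at startup.
struct IFileSystem {
    virtual ~IFileSystem() = default;
    virtual std::string separator() const = 0;
};

struct WindowsFileSystem : IFileSystem {
    std::string separator() const override { return "\\"; }
};

struct PosixFileSystem : IFileSystem {
    std::string separator() const override { return "/"; }
};

// Factory doing the platform detection: the matching class is
// selected here, and callers only ever see the interface.
std::unique_ptr<IFileSystem> make_file_system() {
#if defined(_WIN32)
    return std::make_unique<WindowsFileSystem>();
#else
    return std::make_unique<PosixFileSystem>();
#endif
}
```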

Also, when working with OpenGL you will have to bind function pointers to library functions anyway, unless you stay stuck at 2.0 immediate mode, in order to get all the (relatively) new and fancy extension functions from the versions beyond GL 2.0.
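That binding follows a fixed pattern: declare a function-pointer type, ask a loader for the entry point by name, and call through the bound pointer. A self-contained sketch of the pattern; `get_proc_address` and the stub it returns are stand-ins of mine for the real platform loaders (wglGetProcAddress / glXGetProcAddress / eglGetProcAddress):

```cpp
#include <cstring>

// Function-pointer type matching the entry point's signature.
using PFNGLGENBUFFERS = void (*)(int n, unsigned* buffers);

// Stub standing in for the driver's real glGenBuffers, so the
// pattern is demonstrable without a GL context.
void fake_glGenBuffers(int n, unsigned* buffers) {
    for (int i = 0; i < n; ++i) buffers[i] = static_cast<unsigned>(i + 1);
}

// Stand-in loader: resolves an entry point by name, like the
// platform-specific *GetProcAddress functions do.
void* get_proc_address(const char* name) {
    if (std::strcmp(name, "glGenBuffers") == 0)
        return reinterpret_cast<void*>(&fake_glGenBuffers);
    return nullptr;
}

// Bind once at startup, then call through the pointer everywhere.
PFNGLGENBUFFERS glGenBuffers_ptr =
    reinterpret_cast<PFNGLGENBUFFERS>(get_proc_address("glGenBuffers"));
```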

I think all that's needed is a good platform-anchored assembly initializer and a library with a minimal set of classes and functions to support the basic features .NET has. Maybe worth a thought if I ever have room for a new project :D

Interestingly, p-code systems like this were first implemented in the 1960s, and OSes based on them became one of the competing alternatives to native compilation in the heady early days of the microcomputer (Microsoft's CP/M clone MS-DOS won that competition, but for non-technical reasons).  The platform-portability problem has been solved in more recent implementations, such as Google's Go, by statically embedding the OS runtime in the application at final compile time, so that such programs can run alongside native applications on non-p-code OSes.

 

Stephen M. Webb
Professional Free Software Developer

This topic is closed to new replies.
