This might be a dumb question, but... (i.e. assembly)

Started by
2 comments, last by JohnBolton 17 years, 1 month ago
If a user upgrades to a different processor, would it ever be beneficial to recompile? I was asked this, but I am not sure how to answer it. My first guess would be no, because the program was written without knowledge of any new features that a new processor might offer. I am almost inclined to say it might function worse.
The likely answer is "no", in the sense that recompiling with the same compiler and the same settings isn't going to make a bit of difference.
BUT without a recompile targeting the new processor, you are likely to see worse-than-optimal performance, because the old code took into account
a different cache line size and different instruction timings, and didn't know about new instructions it could be using instead.

If you use a different compiler, or an updated compiler, then you may see improvements.
New processors have their own quirks, and if the compiler knows about them, then it can optimize for them. That is why you see
so many options on most compilers to optimize for P3, P4, Intel, AMD, AMD64, etc.: each processor needs special care
to get code to run fast on it.

Note that if the code is written with explicit SSE1 support, the compiler probably can't update it to SSE2 on its own.
If the programmer made a bunch of optimizations for one processor by hand (inline _asm, special padding/alignments...), then
a recompile isn't going to fix those either.
The potential merits depend on a whole variety of factors:

- the type of application (e.g. graphics/maths, parsing, data mining)
- the differences/similarities between the original processor and the new one(s)

In general, you should only consider recompiling a binary specifically for your new core if you know that the application in question is already CPU-bound, e.g. because it is heavy on vector maths and similar work; in that case, new processor features may not be properly leveraged by an executable compiled for the old core.

Nonetheless, when it comes to games, you need to take into account that many games are not CPU-limited in the first place, but rather GPU-bound.

In general, this is one of the many scenarios where a properly modularized architecture/design pays off directly, because it lets you provide different modules (e.g. libraries/DLLs) for different backends.

That way, your application can query the host platform/architecture for its capabilities and then decide to load a DLL compiled for that particular architecture, rather than falling back to a more generic implementation that may not be as efficient on that platform, because it was compiled for a more generic target (e.g. i386 rather than i586, or i586 rather than i686).

Building the corresponding DLLs is merely a matter of using the proper build flags/settings, so it's usually a no-brainer, and it will even enable you to provide your end users with credible profiling info, so that they themselves can assess whether it makes sense to upgrade their hardware or to use one particular version of your app.
It depends on how you configure the compiler. Keep in mind that the compiler targets the CPU you tell it to target, not the CPU it is running on.
John Bolton | Locomotive Games (THQ) | Current Project: Destroy All Humans (Wii). IN STORES NOW!

This topic is closed to new replies.
