
cr88192

Posted 26 March 2013 - 02:26 PM

defining the sizes of various numeric types (for example: "char" and "unsigned char" are 8 bits, "short" and "unsigned short" are 16 bits, ...);

C has addressed that with the sized types, like int32_t.
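
for example, a minimal sketch of what using the <stdint.h> fixed-width types looks like (standard header only, nothing target-specific assumed):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t  a = -123456;  /* exactly 32 bits, regardless of what "int" happens to be */
    uint16_t b = 0xBEEF;   /* exactly 16 bits */
    uint8_t  c = 0xFF;     /* exactly 8 bits */

    /* prints "4 2 1" on any implementation that provides these exact-width types */
    printf("%zu %zu %zu\n", sizeof a, sizeof b, sizeof c);
    return 0;
}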


The "old style" / K&R style function declarations have been marked as obsolete since at least 1999, and many current compilers don't accept them any more.
 
As always though, the C99 spec mentions that they didn't forbid them outright for fear of breaking people's code. C++11 has a similar amount of hoop-jumping, because an updated standard that breaks old code is basically just a new language, not an update.
 
actually, AFAIK they are still present in C11 as well, and are still a required feature.
 
I would otherwise assume they could be demoted to "optional" features, since it makes little sense to require a feature that only some compilers bother to support (and, likewise, compilers that still see reason to support them can continue to do so).
 
like, if VLAs were demoted to an optional feature (as C11 did), why not K&R-style declarations?...
 
part of the controversy seems to be that some people feel that if the old-style declaration syntax were dropped, support for "()" style empty argument lists would also need to be dropped. I personally think it more sensible to drop the old-style declarations but keep "()" as it is.
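
to illustrate the distinction (a minimal sketch; the function names are just made up):

/* old-style / K&R-style definition, obsolescent since C99: */
int add(a, b)
    int a;
    int b;
{
    return a + b;
}

/* modern prototype-style definition: */
int add2(int a, int b)
{
    return a + b;
}

/* "()" declares a function taking an unspecified argument list,
   which is different from "(void)", which takes no arguments at all */
int get_value();
int get_value_void(void);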
 
 
the issue with integer sizes (for the core integer types) is that a lot of code assumes particular sizes, and likely wouldn't work correctly on targets where the sizes differ.
yes, this is more of a profile issue than a legacy issue.
 

all of this doesn't even need to be in "the standard", but could instead be a "standardized profile".

Yeah, it would defeat the purpose of the whole C language to nail down all of those things, which are basically hardware details, as then the language couldn't be ported to other types of hardware. Having sub-standards/profiles for certain platforms, e.g. "C for x86", makes a bit more sense though. ;)

 

if you don't nail down endianness, then the "profile" could apply equally to x86, PowerPC, and ARM.

 

if endianness is specified (say, as little-endian), then it mostly covers x86 and most ARM devices.

IIRC, both PPC and ARM are bi-endian, but typically people run ARM devices in little-endian mode, and PPC in big-endian mode.

 

if it gets more specific, like whether or not unaligned loads/stores are allowed, ..., then it is probably target-specific.

 

 

as for endianness, maybe 95% of the time there is not much reason to care, and in the few cases where there is reason to care, it may make sense to have some way to specify it and have the compiler emulate it if needed (say, if we can mark "this value needs to be LE or BE", then the code has presumably already agreed to pay for any requisite byte-swapping on loads/stores).
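
as a rough sketch of what "paying for the byte-swapping" already looks like in plain C today (the helper names here are just made up for illustration):

#include <stdint.h>

/* read/write a 32-bit value as little-endian bytes, regardless of the
   host's native byte order; the shifts are the byte-swapping cost */
static uint32_t get_u32_le(const unsigned char *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}

static void put_u32_le(unsigned char *p, uint32_t v)
{
    p[0] = (unsigned char)(v);
    p[1] = (unsigned char)(v >> 8);
    p[2] = (unsigned char)(v >> 16);
    p[3] = (unsigned char)(v >> 24);
}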

 

per-structure or per-value indication could be more useful than specifying it globally, though; the global endianness could still be left unspecified.

 

then one could have a structure or similar and be able to say that, with a compiler implementing this profile and a structure following the relevant rules, the exact byte-for-byte layout of the structure is known.
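
a sketch of the idea (the "profile rules" here are purely hypothetical; today the defined layout has to be produced by hand via explicit serialization):

#include <stdint.h>

/* a structure whose wire/on-disk layout we want pinned down:
   offset 0: magic (u32, LE), offset 4: version (u16, LE),
   offset 6: flags (u16, LE); 8 bytes total, no padding */
struct file_header {
    uint32_t magic;
    uint16_t version;
    uint16_t flags;
};

/* under a hypothetical "LE, packed" profile the struct bytes themselves
   would already match this; without it, we serialize explicitly */
static void header_to_bytes(unsigned char out[8], const struct file_header *h)
{
    out[0] = (unsigned char)(h->magic);
    out[1] = (unsigned char)(h->magic >> 8);
    out[2] = (unsigned char)(h->magic >> 16);
    out[3] = (unsigned char)(h->magic >> 24);
    out[4] = (unsigned char)(h->version);
    out[5] = (unsigned char)(h->version >> 8);
    out[6] = (unsigned char)(h->flags);
    out[7] = (unsigned char)(h->flags >> 8);
}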

 

 

such a profile, if it existed, could still be "reasonably" portable within a range of targets, with any points of disagreement possibly being emulated.

this would then make it a little more like Java or C# in these regards.

 

granted, this would be N/A for some targets, but those targets need not implement this profile.


