HaywireGuy

Why declare in reversed order for _WIN64?


Recommended Posts

Good day people, I am seeing this in WinSock.h, but I'm not sure why we have to "reverse" the declaration order when compiling for _WIN64:

    struct servent
    {
      #ifdef _WIN64
        char    FAR * s_proto;
        short   s_port;
      #else
        short   s_port;
        char    FAR * s_proto;
      #endif
    };

Thanks in advance for any replies.

Structure members are usually declared from largest size to smallest. This keeps each member aligned to whatever boundary the system wants (32-bit, 64-bit, etc.) and reduces the amount of padding the compiler has to insert. Pointers on Win64 are now 8 bytes, while a short remains 2 bytes.


// Win32: every member here is 4 bytes, so no padding is needed.
struct fine
{
    int  m_int;   /* 4 bytes, no padding */
    long m_long;  /* 4 bytes, no padding */
    int  m_int2;  /* 4 bytes, no padding */
};

// But on Win64, where a pointer member grows to 8 bytes
// (note: Win64 is LLP64, so long itself stays 4 bytes):
struct bad
{
    int   m_int;  /* 4 bytes + 4 bytes of padding (8 total) */
    char *m_ptr;  /* 8 bytes, no padding */
    int   m_int2; /* 4 bytes + 4 bytes of tail padding (8 total) */
};

struct win64Good
{
    char *m_ptr;  /* 8 bytes, no padding */
    int   m_int;  /* m_int and m_int2 fill 8 consecutive bytes, no padding */
    int   m_int2;
};




[EDIT]
Hmm, that's odd. Now that I re-read your post, either way the char * would be bigger than the short, so there goes that theory =) Unless that ordering is meant for backward compatibility with 16-bit systems too...
[EDIT]

moeron is correct as to the desire to change. Now it breaks binary compatibility, but Win64 in native mode does anyway, which is why they can get away with it.
Don't see any difference byte order would make.

Thanks guys,

This is the complete structure (I shouldn't have removed the first two
members). But even in this case, 6 bytes of padding will still follow "short
s_port", right? Unless it's better to have padding at the end of the
structure than somewhere in the middle.


    struct servent
    {
        char    FAR * s_name;
        char    FAR * FAR * s_aliases;
      #ifdef _WIN64
        char    FAR * s_proto;
        short   s_port;
      #else
        short   s_port;
        char    FAR * s_proto;
      #endif
    };

Quote:
Original post by Jan Wassenberg
moeron is correct as to the desire to change. Now it breaks binary compatibility, but Win64 in native mode does anyway, which is why they can get away with it.
Don't see any difference byte order would make.
Yes, excellent answer. It's one of those things where someone didn't do it the best way to begin with. I get those a lot.

Quote:
But even in this case padding of 6 bytes will still happen to "short
s_port" too, right?

Negative - padding is only strictly required before a data member. Sufficiently studly compilers could in principle skip padding at the end of a struct, though in practice sizeof gets rounded up to a multiple of the struct's alignment so that every element of an array of the struct stays aligned.

iMalc:
Quote:
Yes, excellent answer. It's one of those things where someone didn't do it the best way to begin with. I get those a lot.

:) Hindsight is 20/20; once released, it's reasonable to set APIs in stone.
How do you mean "I get those a lot"?

Thanks Jan. If I get your point correctly, the "bad" structure moeron showed may
or may not have padding after the final "int", then? If no padding were added at
the end, the whole structure would be 8 + 8 + 4 = 20 bytes. That doesn't look
like an "optimized" size to me... just wondering.


  struct bad
  {
      int   m_int1;  /* 4 bytes + 4 bytes of padding */
      char *m_ptr;   /* 8 bytes */
      int   m_int2;  /* 4 bytes - no padding at the end? */
  };

Exactly.
As to "optimized", it's a bit wasteful, yes. But the compiler cannot do anything about it; struct members must not be rearranged, since the coder may access them via pointer + direct offset.
This is why we must manually order members from largest to smallest.

Thanks Jan and everyone else, it's much clearer now. But I am still a bit
concerned about Microsoft doing this in their "globally" used header files. If
any of us is accessing a structure member by direct offset, that code is going
to break (though I know we should never do that).
