
Bit-fields



#1 BinaryPhysics   Members   -  Reputation: 294


Posted 09 September 2012 - 09:41 AM

This has been bugging me since last night. It is really for abstract, educational purposes, so I'm not after a "bit-fields are evil, do not use them" response.

As stated, the question I have concerns bit-fields; as in:

[source lang="cpp"]
/*
 * flags used to describe the state of a log file
 */
struct flags {
    unsigned open       : 1;
    unsigned redirected : 1;
    unsigned error      : 1;
    unsigned            : 5;
};
[/source]

Now, logically (based on the explanations of bit-fields) the statement:

[source lang="cpp"]
sizeof (struct flags) == 1
[/source]

should be true.

However (using Visual Studio 2010), the size of the structure is actually 4 bytes. Further reading stated that bit-fields are highly implementation-dependent in C, so I put the size down to that and moved on.

But now, the grammar states that the 'sizeof' operator cannot be applied to bit-fields.

So this leaves me with a question: is the operator actually returning the right value and bit-fields aren't implemented the way the specification describes, or is the operator wrong and I should take it on blind faith that the structure is the right width?

Other experiments (wider members) produce larger values for the size of the structure.
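A minimal sketch of that sort of experiment (the struct names are just for illustration; the printed sizes are implementation-dependent, and the values in the comment are what Visual Studio 2010 would typically report):

[source lang="cpp"]
#include <stdio.h>

/* total width 8 bits, but based on unsigned int */
struct narrow { unsigned a : 4;  unsigned b : 4; };

/* total width 33 bits: no longer fits in one unsigned int unit */
struct wide   { unsigned a : 16; unsigned b : 16; unsigned c : 1; };

int main(void)
{
    /* On Visual Studio 2010 this typically prints 4 and 8. */
    printf("%u %u\n",
           (unsigned)sizeof(struct narrow),
           (unsigned)sizeof(struct wide));
    return 0;
}
[/source]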

Edited by BinaryPhysics, 09 September 2012 - 09:44 AM.



#2 Hodgman   Moderators   -  Reputation: 30926


Posted 09 September 2012 - 09:54 AM

Try sizeof with struct flags { char data; }; and struct flags { char open : 1; }
It's probably implemented by making a struct with an 'unsigned' inside it, and then you're specifying meanings for 8 of the bits that make up the unsigned.
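A minimal sketch of that comparison (the struct and member names are made up for the example; the sizes in the comment are what MSVC typically reports, and a char-based bit-field itself relies on implementation-defined behaviour in C, as discussed later in the thread):

[source lang="cpp"]
#include <stdio.h>

struct flags_char     { char data; };          /* plain char member           */
struct flags_char_bit { char open : 1; };      /* char-based bit-field member */
struct flags_uint_bit { unsigned open : 1; };  /* unsigned-based bit-field    */

int main(void)
{
    /* With MSVC this typically prints 1, 1 and 4. */
    printf("%u %u %u\n",
           (unsigned)sizeof(struct flags_char),
           (unsigned)sizeof(struct flags_char_bit),
           (unsigned)sizeof(struct flags_uint_bit));
    return 0;
}
[/source]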

Edited by Hodgman, 09 September 2012 - 09:55 AM.


#3 BinaryPhysics   Members   -  Reputation: 294


Posted 09 September 2012 - 09:59 AM

Try sizeof with struct flags { char data; }; and struct flags { char open : 1; }


The result is that they're both 1.

It's probably implemented by making a struct with an 'unsigned' inside it, and then you're specifying meanings for 8 of the bits that make up the unsigned.


That makes sense considering the size of an unsigned is 4. Does this mean I can't necessarily take it on faith? Is the size actually implementation dependent too?

Edited by BinaryPhysics, 09 September 2012 - 10:00 AM.


#4 Hodgman   Moderators   -  Reputation: 30926


Posted 09 September 2012 - 10:13 AM

The fact that sizeof returns 4 means that the compiler assumes that when using that struct, you'll have allocated 4 bytes for it.
You could just allocate 1 byte and cast it to your struct, but for all you know, you'll read outside the bounds of your allocation by 3 bytes. You can test whether this is true or not, and if it does behave correctly, then your code is fine for that one compiler. You'll have to redo the tests whenever you change compilers.

If the char version works, why not just use that?
You could also add a static assertion, so that the code will be flagged as broken if some other compiler is incompatible, e.g. STATIC_ASSERT( sizeof(struct flags) == 1 );
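STATIC_ASSERT isn't a standard macro; a minimal sketch of one common way to write it in C before C11 (the macro names here are assumptions, C11 onwards provides _Static_assert directly, and the char base type is itself implementation-defined in C):

[source lang="cpp"]
/* Concatenation helpers so the typedef name is unique per line. */
#define SA_CONCAT_(a, b) a##b
#define SA_CONCAT(a, b)  SA_CONCAT_(a, b)

/* Fails to compile (array of negative size) when the condition is false. */
#define STATIC_ASSERT(cond) \
    typedef char SA_CONCAT(static_assert_on_line_, __LINE__)[(cond) ? 1 : -1]

/* The "char version" of the flags struct discussed in the thread. */
struct flags {
    unsigned char open       : 1;
    unsigned char redirected : 1;
    unsigned char error      : 1;
};

/* Breaks the build on any compiler where the struct is not exactly 1 byte. */
STATIC_ASSERT(sizeof(struct flags) == 1);
[/source]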

Edited by Hodgman, 09 September 2012 - 10:13 AM.


#5 BinaryPhysics   Members   -  Reputation: 294


Posted 09 September 2012 - 10:37 AM

I think using the version you suggested is probably the simplest method. Nice and precise.

I was just curious about the 'real world' versions of the textbook example, since I'm simply working through the grammar and elements of the language.

Thanks for your help.

#6 swiftcoder   Senior Moderators   -  Reputation: 10231


Posted 09 September 2012 - 01:06 PM

That makes sense considering the size of an unsigned is 4. Does this mean I can't necessarily take it on faith? Is the size actually implementation dependent too?

I honestly can't see a reason *why* the compiler would allocate a smaller size than the size of the type you have requested. If you want a maximum size of 1, use a char as the base type for your bitfield.



#7 BinaryPhysics   Members   -  Reputation: 294


Posted 09 September 2012 - 03:56 PM

I honestly can't see a reason *why* the compiler would allocate a smaller size...


The example in "The C Programming Language" simply states that flags should be made up of unsigned integers.

#8 swiftcoder   Senior Moderators   -  Reputation: 10231


Posted 09 September 2012 - 04:33 PM

The example in "The C Programming Language" simply states that flags should be made up of unsigned integers.

The MSDN page suggests that allowing char-sized bitfields is a non-standard extension, which would support that view. I'd be interested to see the relevant reference to the standard.

Although, it also strikes me that there is very little reason to allocate a struct of less than 4 bytes - if one is encapsulating that little data, there is a design issue. Not to mention that the overhead of dynamically allocating such a struct would be ridiculous :)



#9 Cornstalks   Crossbones+   -  Reputation: 6991


Posted 09 September 2012 - 04:49 PM


The example in "The C Programming Language" simply states that flags should be made up of unsigned integers.

The MSDN page suggests that allowing char-sized bitfields is a non-standard extension, which would support that view. I'd be interested to see the relevant reference to the standard.

Section 6.7.2.1, paragraph 4:

A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type.



The C++ standard (section 9.6, paragraph 3) is a little different in this regard, and says "A bit-field shall have integral or enumeration type (3.9.1). It is implementation-defined whether a plain (neither explicitly signed nor unsigned) char, short, int, long, or long long bit-field is signed or unsigned" (which implies char, short, int, long, and long long are allowed), and that may be why MSVC's compiler allows more than just _Bool, signed int, and unsigned int.
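A small illustration of the difference between the two passages (the struct and member names are made up for the example):

[source lang="cpp"]
/* Strictly standard C bit-field base types: _Bool, signed int, unsigned int. */
struct portable_flags {
    unsigned open  : 1;
    unsigned error : 1;
};

/* char as a base type is implementation-defined in C (an extension MSVC and
   other compilers accept) but explicitly permitted in C++. */
struct compact_flags {
    unsigned char open  : 1;
    unsigned char error : 1;
};
[/source]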

#10 BinaryPhysics   Members   -  Reputation: 294


Posted 10 September 2012 - 09:24 AM

I think if I use bit-fields I'll stick to the standard then. Might as well not make them any more specific to a given compiler than they need to be.



