MARS_999

int xa : 2; huh?


Nypyren    12062
[url="http://en.wikipedia.org/wiki/Bit_field"]http://en.wikipedia.org/wiki/Bit_field[/url]

Example about halfway down that page.

I try to avoid using this language feature. I use manual shifting and masking instead.
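
For reference, here is a minimal sketch of what that declaration means and of the manual shift/mask alternative mentioned above; the struct and function names are made up for the example:
[code]
#include <stdio.h>

/* Bit-field version: "int xa : 2" asks the compiler to pack xa into 2 bits
   and to do the shifting and masking for you on every access. */
struct WithBitfield {
    int xa : 2;          /* 2-bit field; whether a plain int field is signed is implementation-defined */
    unsigned flags : 6;  /* six more bits packed into the same word */
};

/* Manual version: one plain integer, shifted and masked by hand. */
struct Manual {
    unsigned bits;
};

static unsigned get_xa(const struct Manual *m)
{
    return m->bits & 0x3u;                         /* read the low two bits */
}

static void set_xa(struct Manual *m, unsigned value)
{
    m->bits = (m->bits & ~0x3u) | (value & 0x3u);  /* write the low two bits */
}

int main(void)
{
    struct WithBitfield b = { 0 };
    b.xa = 1;             /* the compiler emits the mask/merge for you */

    struct Manual m = { 0 };
    set_xa(&m, 1);        /* the same store, written out by hand */

    printf("%d %u\n", b.xa, get_xa(&m));  /* prints: 1 1 */
    return 0;
}
[/code]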

frob    44904
The struct bitfield syntax is a holdover from C. There are almost no good reasons to use bitfields any more.

Back in the old days, in the '70s and '80s when bitfields were worthwhile, memory was very expensive. There was no on-die cache until the mid-to-late '80s, so anything that touched memory was slow. There was no CPU pipeline, so if you could use a few extra instructions to avoid a trip to memory you could see great performance improvements. Using a bitfield, in other words having the compiler perform bit shifts followed by the compares or other operations you need, followed by another bit shift back, was much faster than using a whole byte of memory per value.

Today the opposite is true. Memory is cheap and plentiful. On-die cache can hold more than the largest HDDs of that earlier era; there is practically no benefit to the space savings, and the packed layout may even result in cache alignment penalties. With deep pipelines, the shift operations are expensive and will stall your CPU. Using a bitfield in modern code carries a huge performance penalty and should only be done when you have specific reasons to take that penalty.

Washu    7829
[quote name='Martins Mozeiko' timestamp='1339473311' post='4948398']
Clang uses bitfields a lot in its data structures to conserve memory.
[/quote]
Clang is also a C/C++ compiler, and hence its memory usage can explode exponentially on the most trivial of files. It also spends most of its time doing string parsing, and as such the extra overhead of the bit shifting operations is negligible compared to the cost of the string operations.

DvDmanDT    1941
It can be useful when dealing with existing binary file formats, or potentially when sending data over the network.

It's also sometimes used on embedded systems where RAM and so on is very limited. I once had a lab assignment in my introduction to embedded systems class where we were required to use bit fields for something (turning LEDs on and off by flipping bits, or something like that). It turned out that particular version of GCC on that particular platform generated faulty code when dealing with bit fields, so we had to resort to manual shifting/masking/XORing and what-not.
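
For illustration, a hedged sketch of that kind of manual shifting/masking/XORing workaround; the register name, address, and bit positions are entirely invented and would only mean something on real hardware:
[code]
#include <stdint.h>

/* Hypothetical memory-mapped LED output register (address invented for the example). */
#define LED_PORT (*(volatile uint8_t *)0x40020014u)

#define LED_GREEN_BIT 0u
#define LED_RED_BIT   1u

static void led_on(unsigned bit)     { LED_PORT |= (uint8_t)(1u << bit);  }
static void led_off(unsigned bit)    { LED_PORT &= (uint8_t)~(1u << bit); }
static void led_toggle(unsigned bit) { LED_PORT ^= (uint8_t)(1u << bit);  }
[/code]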

NightCreature83    5002
[quote name='frob' timestamp='1339473142' post='4948396']
The struct bitfield syntax is a holdover from C. There are almost no good reasons to use bitfields any more.

Back in the old days, in the '70s and '80s when bitfields were worthwhile, memory was very expensive. There was no on-die cache until the mid-to-late '80s, so anything that touched memory was slow. There was no CPU pipeline, so if you could use a few extra instructions to avoid a trip to memory you could see great performance improvements. Using a bitfield, in other words having the compiler perform bit shifts followed by the compares or other operations you need, followed by another bit shift back, was much faster than using a whole byte of memory per value.

Today the opposite is true. Memory is cheap and plentiful. On-die cache can hold more than the largest HDDs of that earlier era; there is practically no benefit to the space savings, and the packed layout may even result in cache alignment penalties. With deep pipelines, the shift operations are expensive and will stall your CPU. Using a bitfield in modern code carries a huge performance penalty and should only be done when you have specific reasons to take that penalty.
[/quote]
There are still reasons to use this technique, and they won't disappear as long as we have console games. Consoles have a far more restricted memory budget than a general PC while still using modern CPUs.

You also benefit from bit fields when they are embedded in an object that the game or application spawns by the thousands. Compare the size of an object that contains 8 separate bools with one that packs the same flags into a single unsigned int bit field: if each bool ends up occupying a full 4-byte word (as it does with some compilers and with Win32-style BOOLs), that's a saving of 7*4 bytes per object you create, and even with one-byte bools you still save several bytes.
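
As a rough sketch of that comparison (the struct names are invented, and the exact sizes are implementation-defined, which is why the example prints them rather than assuming them):
[code]
#include <stdbool.h>
#include <stdio.h>

struct Flags8Bools {
    bool a, b, c, d, e, f, g, h;          /* usually one byte per bool, plus any padding */
};

struct FlagsBitfield {
    unsigned a : 1, b : 1, c : 1, d : 1,
             e : 1, f : 1, g : 1, h : 1;  /* all eight flags share one unsigned int */
};

int main(void)
{
    /* On a typical compiler this prints 8 and 4. */
    printf("8 bools:   %zu bytes\n", sizeof(struct Flags8Bools));
    printf("bit field: %zu bytes\n", sizeof(struct FlagsBitfield));
    return 0;
}
[/code]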

There are also ways to break alignment without using bit fields, by the way:
[code]
struct Example
{
    short x;
    int y; // Not on a 4 byte alignment here.
};
[/code]

There are a lot of valid ways you can use bit fields; you just have to be mindful of what you are doing when you use them.

Bregma    9199
C was invented in the early 1970s to help improve the productivity of the folks writing and maintaining the Unix operating system.

An operating system is a piece of software that sits directly on top of the hardware, converting the various electrical signals into abstract concepts that can be manipulated mathematically to emulate some other phenomena in the minds of observers. Many of those electrical signals are binary in nature (due to the design of binary digital computers) and routed through a collection of wires known as "address lines" and "data lines". These wires are arranged logically so that when a certain pattern of high and low voltages appear on them, other hardware performs some action.

From a logical point of view, arranging an appropriate combination of high and low voltages on the address lines and data lines is seen as "writing a data word to a register" or "reading a data word from a register". The hardware control and status wires were logically seen to be bundled into word-size chunks and "mapped" into memory. Thus, controlling hardware was seen from the point of view of an OS developer as reading and writing words to and from memory, since most CPUs only performed word-oriented memory operations. The hardware, however, generally dealt with bit-oriented data at the control level. For example, ready-to-send and ring-detected are binary states.

So, the OS developer sees hardware as a bunch of words in memory, each word containing several bit-oriented fields. Now, he could spend a lot of time writing code to perform bit shifting and masking, but the end result is a lot of duplicated code (as much as 80% of the code) that is hard to read and has a high maintenance overhead. Or, if a new language is being developed to reduce that maintenance overhead, bit field manipulations could be built in and the developers could spend more time drinking coffee and stroking their neck beards.
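
For illustration, a minimal sketch of that idiom using an invented modem-status register; the layout, field names, and address are made up, and real driver code would also need to pin down the compiler's bit ordering and padding guarantees for the target ABI:
[code]
struct modem_status {
    unsigned clear_to_send  : 1;
    unsigned ready_to_send  : 1;
    unsigned ring_detected  : 1;
    unsigned carrier_detect : 1;
    unsigned                : 28;   /* remaining bits of the word, unused here */
};

/* On real hardware this would be the register's memory-mapped address. */
#define UART_MSR (*(volatile struct modem_status *)0x40001018u)

static int modem_is_ringing(void)
{
    return UART_MSR.ring_detected;  /* the compiler emits the load, shift, and mask */
}
[/code]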

So, historically, bitfields in C were developed to make low-level OS development easier, faster, and cheaper (as was the entire language). Interestingly enough, OS development continues to this day with new hardware introduced daily and reams of code written taking advantage of the bitfield facility. This feature of the (almost half-a-century-old) language is still widely used and very useful, and definitely one of the reasons for its longevity.

If you're writing application-level code you are unlikely to need to use that language feature, unless you're developing a loader for older binary-format data files (and chances are, there's already a loader available for them, 'cos they're older). It certainly does not hurt to know what bitfields are and how they work; you never know when you'll need to patch your OS kernel.

Washu    7829
[quote name='NightCreature83' timestamp='1339505431' post='4948463']
There are also ways to break alignment without using bit fields, by the way:
[code]
struct Example
{
    short x;
    int y; // Not on a 4 byte alignment here.
};
[/code]
[/quote]
Uh, no. If the integral type requires 4-byte alignment then the structure will be padded to ensure 4-byte alignment, which usually means it will actually end up looking something like this:
[code]struct PaddedExample
{
    short x;
    short padding; // inserted by the compiler to ensure proper alignment of the following int
    int y;
};[/code]
The only time the piece of code you posted might not be properly aligned in such a manner is if your compiler's packing for the file was altered, typically through explicit control in the source files (for instance, in Visual Studio you can use #pragma pack) or through compiler flags.
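
For example, a small sketch of how explicit packing changes the layout; the struct names are made up, and the expected sizes assume a typical platform with a 2-byte short and a 4-byte int:
[code]
#include <stdio.h>

struct Padded {
    short x;
    int   y;            /* the compiler typically inserts 2 bytes of padding before y */
};

#pragma pack(push, 1)   /* request 1-byte packing; MSVC, GCC, and Clang all accept this form */
struct Packed {
    short x;
    int   y;            /* now placed immediately after x, so it may be misaligned */
};
#pragma pack(pop)

int main(void)
{
    /* On a typical platform this prints 8 and 6. */
    printf("sizeof(Padded) = %zu\n", sizeof(struct Padded));
    printf("sizeof(Packed) = %zu\n", sizeof(struct Packed));
    return 0;
}
[/code]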

Promit    13246
[quote name='Washu' timestamp='1339474028' post='4948399']
[quote name='Martins Mozeiko' timestamp='1339473311' post='4948398']
Clang uses bitfields a lot in its data structures to conserve memory.
[/quote]
Clang is also a C/C++ compiler, and hence its memory usage can explode exponentially on the most trivial of files. It also spends most of its time doing string parsing, and as such the extra overhead of the bit shifting operations is negligible compared to the cost of the string operations.
[/quote]
Not that it worked out for Clang, anyway. The amount of memory consumed by a Clang build is enormous compared to GCC or VC, though I think I still like it best as a compiler.

MARS_999    1627
BTW, how is Clang looking these days? Is it stable enough to use in real work? I haven't looked into it much, but I remember hearing about it a few years ago...

Also wondering if anyone is using it with Code::Blocks...

Hodgman    51223
[quote name='frob' timestamp='1339473142' post='4948396']Back in the old days, in the '70s and '80s when bitfields were worthwhile, memory was very expensive. ... anything that touched memory was slow ... so if you could use a few extra instructions to avoid a trip to memory you could see great performance improvements. Using a bitfield, in other words having the compiler perform bit shifts followed by the compares or other operations you need, followed by another bit shift back, was much faster than using a whole byte of memory per value.

Today the opposite is true. Memory is cheap and plentiful. On-die cache can hold more than the largest HDDs of that earlier era; there is practically no benefit to the space savings, and the packed layout may even result in cache alignment penalties. With deep pipelines, the shift operations are expensive and will stall your CPU. Using a bitfield in modern code carries a huge performance penalty and should only be done when you have specific reasons to take that penalty.[/quote]Are you sure about all of that? When you measure memory access times in terms of CPU speed ([i]e.g. CPU cycles per register<->RAM round trip[/i]), RAM is probably [sup][[i]citation needed, made-up statistic[/i]][/sup] about 10000x [b]slower[/b] now than it was in the '80s. Optimising for memory is practically the only optimisation that actually matters these days -- surely there are cases where you can "waste" 10 cycles on unpacking data to avoid a 1000-cycle cache miss? A shift-by-constant and mask instruction shouldn't be harmful to the pipeline at all ([i]unlike shift-by-variable, which might be microcoded, etc.[/i])...

Packing multiple values into a word or register via shifting and masking is a core idiom in systems-level code, taught to every computer science student, and this C/C++ feature is syntactic sugar aimed at that demographic of systems programmers. To tie back in with what you were saying -- systems-level programming is a huge penalty in a modern applications programming environment, and such environments should avoid C/C++ altogether, unless they have specific reasons to take that penalty ;)
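
As a small aside, a sketch of that packing idiom using an invented RGBA example (not anything from the thread):
[code]
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Pack four 8-bit channels into one 32-bit word with constant shifts and masks. */
static uint32_t pack_rgba(uint8_t r, uint8_t g, uint8_t b, uint8_t a)
{
    return ((uint32_t)r << 24) | ((uint32_t)g << 16) | ((uint32_t)b << 8) | (uint32_t)a;
}

static uint8_t unpack_green(uint32_t rgba)
{
    return (uint8_t)((rgba >> 16) & 0xFFu);   /* shift-by-constant plus mask */
}

int main(void)
{
    uint32_t c = pack_rgba(0x12, 0x34, 0x56, 0x78);
    printf("0x%08" PRIX32 " green=0x%02X\n", c, (unsigned)unpack_green(c));
    /* prints: 0x12345678 green=0x34 */
    return 0;
}
[/code]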

bubu LV    1436
[quote name='MARS_999' timestamp='1339558046' post='4948701']
BTW, how is Clang looking these days? Is it stable enough to use in real work? I haven't looked into it much, but I remember hearing about it a few years ago...

Also wondering if anyone is using it with Code::Blocks...
[/quote]
It's looking good.
Apple uses it as the default compiler for iOS projects as of Xcode 4.3, so most likely all the latest apps for iOS devices are compiled with Clang.
Not sure whether it's the default for OS X projects, but you can definitely use it for them.

For targeting Windows there are a few issues with Clang, but generally it works fine.

