[C++]BitStream

This topic is 3456 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Hi guys! I need to create a bitstream to compress data and then send it over the internet. Can someone tell me how it should work in theory? I know of the C++ bitshift operators >> and <<, but I can't understand how to insert, for example, a bit inside a char :/

Quote:
Original post by roby65
Hi guys!
I need to create a bitstream to compress data and then send it over the internet.
Can someone tell me how it should work in theory?

I know of the C++ bitshift operators >> and <<, but I can't understand how to insert, for example, a bit inside a char :/


You haven't given us nearly enough information.

Quote:
Original post by roby65
For example, I have to put 5 bools inside a char... how do I do it simply:

class.addBool(true)
class.addBool(false)

etc.?


I would have to ask if you're sure that bit-level compression is really what you want.

If it is then the following links will tell you everything you need to know.

This link covers the basics of bit manipulation (same as I posted in the other thread):

Becoming bitwise

This link describes how to represent and parse bit streams of arbitrary length:

Bitstream Parsing in C++

Quote:
Original post by fpsgamer
This link covers the basics of bit manipulation (same as I posted in the other thread):

Becoming bitwise


There's potential for a good reference there, but unfortunately a lot of issues. I spotted:

- the assumption that signed char == char;
- "the maximum value six bits can hold is 127" (should be seven bits - what he presumably meant was bits 0 to 6 inclusive);
- something about representing the "other 5" digits in hex vs. decimal (should be 6 - the guy apparently has difficulty with the counting-from-zero concept);
- a very strange description of how signed numbers are represented (suggesting a sign-magnitude representation, I think, with no indication that this is actually platform-specific);
- a reference to near and far pointers (a rare concern these days);
- a description of the 'enum' type as having the same range as signed int (this is up to the compiler AFAIK, and in C++, each enum makes its own type);
- confusing missing superscripts (though he apparently knows how to make superscripts - witness strange constructs like "2 to the 4th power");
- and "I haven't bench tested Horsmeier's Bitwise arithmetic functions above, so I have no idea if they are optimised" (wtf?! As if basic arithmetic could be optimized, or as if the language were interpreted!).

Quote:
Original post by Zahlman
Quote:
Original post by fpsgamer
This link covers the basics of bit manipulation (same as I posted in the other thread):

Becoming bitwise


There's potential for a good reference there, but unfortunately a lot of issues. I spotted:

- the assumption that signed char == char;
- "the maximum value six bits can hold is 127" (should be seven bits - what he presumably meant was bits 0 to 6 inclusive);
- something about representing the "other 5" digits in hex vs. decimal (should be 6 - the guy apparently has difficulty with the counting-from-zero concept);
- a very strange description of how signed numbers are represented (suggesting a sign-magnitude representation, I think, with no indication that this is actually platform-specific);
- a reference to near and far pointers (a rare concern these days);
- a description of the 'enum' type as having the same range as signed int (this is up to the compiler AFAIK, and in C++, each enum makes its own type);
- confusing missing superscripts (though he apparently knows how to make superscripts - witness strange constructs like "2 to the 4th power");
- and "I haven't bench tested Horsmeier's Bitwise arithmetic functions above, so I have no idea if they are optimised" (wtf?! As if basic arithmetic could be optimized, or as if the language were interpreted!).


I didn't realize that article was so wtf-worthy. The problem is that I actually read it five years ago, when I knew significantly less than I do now. I guess that's what I get for not re-reading it before posting it :)

Thanks for pointing this out!

Quote:

description of the 'enum' type having the same range as signed int (this is up to the compiler AFAIK, and in C++, each enum makes its own type)


You're right. The underlying type of an enum is only required to be "large enough" to hold all of its enumerators. Whether it is signed or unsigned is also implementation-defined.

Quote:

I spotted: assumption that signed char == char;


This is something I myself never quite understood. Why is it that in C++ char, signed char, and unsigned char are each distinct types?

What are the exact implications of this?

