[C++] BitStream

Hi guys! I need to create a bitstream to compress data and then send it over the internet. Can someone tell me how it should work in theory? I know of the C++ bit shifts >> and <<, but I can't understand how to insert, for example, a bit inside a char :/
Quote:Original post by roby65
Hi guys!
I need to create a bitstream to compress data and then send it over the internet.
Can someone tell me how it should work in theory?

I know of the C++ bit shifts >> and <<, but I can't understand how to insert, for example, a bit inside a char :/


You haven't given us nearly enough information.
Recent thread with links to relevant topics.
For example, I have to put 5 bools inside a char. How can I do it simply, like:

class.addBool(true)
class.addBool(false)

etc.?
Quote:Original post by roby65
For example, I have to put 5 bools inside a char. How can I do it simply, like:

class.addBool(true)
class.addBool(false)

etc.?


I would have to ask if you're sure that bit-level compression is really what you want.

If it is, then the following links will tell you everything you need to know.

This link covers the basics of bit manipulation (same as I posted in the other thread):

Becoming bitwise

This link describes how to represent and parse bit streams of arbitrary length:

Bitstream Parsing in C++
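
To make the addBool idea concrete, here is a minimal sketch of the kind of writer those articles describe. The BitWriter and addBool names are made up for illustration; they are not from either article.

#include <cstddef>
#include <vector>

// Minimal sketch: packs bools one bit at a time into a growing byte buffer.
class BitWriter {
    std::vector<unsigned char> buffer;
    std::size_t bitCount; // total bits written so far
public:
    BitWriter() : bitCount(0) {}

    void addBool(bool value) {
        if (bitCount % 8 == 0)   // current byte is full (or buffer is empty)
            buffer.push_back(0); // start a fresh byte
        if (value)
            buffer.back() |= (unsigned char)(1u << (bitCount % 8)); // set the bit
        ++bitCount;
    }

    const std::vector<unsigned char>& bytes() const { return buffer; }
};

After five addBool calls you have used five bits of a single byte; the remaining three bits are just zero padding.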
Quote:Original post by fpsgamer
This link covers the basics of bit manipulation (same as I posted in the other thread):

Becoming bitwise


There's potential for a good reference there, but unfortunately a lot of issues. I spotted:

- an assumption that signed char == char;
- "the maximum value six bits can hold is 127" (should be seven bits - what he presumably meant was bits 0 to 6 inclusive);
- something about representing the "other 5" digits in hex vs. decimal (should be 6 - the guy apparently has difficulty with the counting-from-zero concept);
- a very strange description of how signed numbers are represented (suggesting a sign-magnitude representation, I think, with no indication that this is actually platform-specific);
- a reference to near and far pointers (a rare concern these days);
- a description of the 'enum' type having the same range as signed int (this is up to the compiler AFAIK, and in C++, each enum makes its own type);
- confusing missing superscripts (though he apparently knows how to make superscripts - witness strange constructs like "2 to the 4th power");
- "I haven't bench tested Horsmeier's Bitwise arithmetic functions above, so I have no idea if they are optimised" (wtf?!? As if basic arithmetic could be optimized, or as if the language were interpreted!).
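
For the record on the six-vs-seven-bits point: n bits hold values 0 through 2^n - 1, so 127 is the seven-bit maximum. A throwaway check of my own, nothing from the article:

#include <cassert>

int main() {
    assert((1 << 6) - 1 == 63);  // six bits (positions 0..5) max out at 63
    assert((1 << 7) - 1 == 127); // seven bits (positions 0..6) reach 127
}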
Quote:Original post by Zahlman
Quote:Original post by fpsgamer
This link covers the basics of bit manipulation (same as I posted in the other thread):

Becoming bitwise


There's potential for a good reference there, but unfortunately a lot of issues. I spotted:

- an assumption that signed char == char;
- "the maximum value six bits can hold is 127" (should be seven bits - what he presumably meant was bits 0 to 6 inclusive);
- something about representing the "other 5" digits in hex vs. decimal (should be 6 - the guy apparently has difficulty with the counting-from-zero concept);
- a very strange description of how signed numbers are represented (suggesting a sign-magnitude representation, I think, with no indication that this is actually platform-specific);
- a reference to near and far pointers (a rare concern these days);
- a description of the 'enum' type having the same range as signed int (this is up to the compiler AFAIK, and in C++, each enum makes its own type);
- confusing missing superscripts (though he apparently knows how to make superscripts - witness strange constructs like "2 to the 4th power");
- "I haven't bench tested Horsmeier's Bitwise arithmetic functions above, so I have no idea if they are optimised" (wtf?!? As if basic arithmetic could be optimized, or as if the language were interpreted!).


I didn't realize that article was so wtf-worthy. The problem is that I actually read it 5 years ago, when I knew significantly less than I do now. I guess that's what I get for not re-reading it before posting it :)

Thanks for pointing this out!

Quote:
description of the 'enum' type having the same range as signed int (this is up to the compiler AFAIK, and in C++, each enum makes its own type)


You're right. The underlying type of an enum is only required to be large enough to hold all of its enumerators. Also, whether it is signed or unsigned is implementation-defined.
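
For example (a toy illustration of my own), the compiler is free to pick different underlying sizes for different enums, as long as each one fits:

#include <iostream>

enum Small { SmallMax = 100 };        // would fit in a single byte
enum Big   { BigMax   = 0x7FFFFFFF }; // needs at least 32 bits

int main() {
    // The sizes printed here are implementation-defined; the standard only
    // requires each enum's underlying type to be big enough for its values.
    std::cout << sizeof(Small) << " " << sizeof(Big) << std::endl;
}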

Quote:
I spotted: assumption that signed char == char;


This is something I myself never quite understood: why is it that in C++, char, signed char, and unsigned char are each distinct types?

What are the exact implications of this?
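
One consequence I do see (my own toy example, nothing from the article) is that all three can be overloaded separately, even though plain char must share a representation with one of the other two:

#include <iostream>

void f(char)          { std::cout << "char\n"; }
void f(signed char)   { std::cout << "signed char\n"; }
void f(unsigned char) { std::cout << "unsigned char\n"; }

int main() {
    f('a');                // a character literal is plain char
    f((signed char)'a');   // distinct type, distinct overload
    f((unsigned char)'a'); // distinct type, distinct overload
}

But is that all there is to it?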
You mean like this?
I can't verify if any of that is still right; it's been a while. The sizeof(type) probably shouldn't be used, as it assumes the types' sizes never change. This assumes you've learned template specialization and know the binary operators well enough to follow the logic.
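
The gist of it (a rough from-memory sketch, not the actual code behind the link) is a templated write with a specialization for bool, something like:

#include <cstddef>
#include <vector>

class BitStream {
    std::vector<unsigned char> data;
    std::size_t bit; // total bits written
public:
    BitStream() : bit(0) {}

    // Generic case: write the low `bits` bits of value, least significant first.
    // (Assumes non-negative values; right-shifting negatives is murky.)
    template <typename T>
    void write(T value, unsigned bits = sizeof(T) * 8) {
        for (unsigned i = 0; i < bits; ++i) {
            if (bit % 8 == 0)
                data.push_back(0);
            if ((value >> i) & 1)
                data.back() |= (unsigned char)(1u << (bit % 8));
            ++bit;
        }
    }
};

// Specialization: a bool only ever needs a single bit.
template <>
inline void BitStream::write<bool>(bool value, unsigned)
{
    write<unsigned>(value ? 1u : 0u, 1);
}

Calling stream.write(true) then picks the bool specialization and consumes one bit, while stream.write(someInt, 5) packs just the low five bits.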

