Zmurf

C++ Console - Char - Binary / Binary - Char


Recommended Posts

Char to binary can be done by getting the ASCII value of the char:

int val = (int) character_variable;

then taking this value and working out the binary digits from it. To do this, just check how big it is and start subtracting powers of two. Put this in a function so you can call it recursively. Example: value = 67

int bits[8] = {0};

// code to check for bigger powers of 2
if (value >= 64)
{
    bits[6] = 1; // this is the 7th bit, remember
    value -= 64;
    call_function_again(value);
}

// code for lower powers of two

So, our function would see 67 is bigger than 64, so it would take 64 from 67, leaving 3. It would also set the corresponding bit in our array, bits. Then it would go back (recursive call) and see 3 is bigger than 2. 2 would be taken out and the second bit would be set. We are left with 1. 1 equals 1, so 1 would be taken out and the first bit would be set. When we reach 0, we know we are done.
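The approach above can be written out as one complete function; this is a minimal sketch, where the name to_bits and the loop that finds the largest fitting power of two are my own additions, not part of the original post:

```cpp
// Recursively subtract powers of two, setting bits[] as we go.
// bits[0] is the least significant bit.
void to_bits(int value, int bits[8])
{
    if (value == 0)
        return;                          // nothing left to subtract: done

    int power = 7;
    while ((1 << power) > value)
        --power;                         // largest power of two that fits

    bits[power] = 1;                     // set that bit...
    to_bits(value - (1 << power), bits); // ...and recurse on the remainder
}
```

For value = 67 this sets bits 6, 1, and 0, matching the 64 + 2 + 1 walkthrough above.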
Binary to char is simply adding the powers of two. Example: 00000111

0 0 0 0 0 1 1 1
0 0 0 0 0 4 2 1

4 + 2 + 1 = 7

then

char character_variable = (char)7;
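The summing step can be sketched as a small function; from_bits is a name invented for this example (it is not from the original post), and it assumes an 8-character string with the most significant bit first:

```cpp
// Sum the powers of two for each '1' in an 8-character bit string
// (most significant bit first).
char from_bits(const char *s)
{
    int value = 0;
    for (int i = 0; i < 8; ++i)
        value = value * 2 + (s[i] - '0'); // shift left, add the next bit
    return (char)value;
}
```

For example, from_bits("00000111") gives 7, matching the table above.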

#include <bitset>
#include <iostream>

unsigned char value = 42;
const size_t bits = 8;
std::bitset< bits > bitwise_value;

// note: assign to bitwise_value[i], not the whole bitset. std::bitset prints
// element [bits-1] first, so copying each bit straight across gives the usual
// most-significant-bit-first display:
for ( unsigned int i = 0 ; i < bits ; ++i ) bitwise_value[i] = (value >> i) & 1; //big endian format

// reversing the index gives a least-significant-bit-first display:
for ( unsigned int i = 0 ; i < bits ; ++i ) bitwise_value[i] = (value >> (bits - 1 - i)) & 1; //little endian format

std::cout << bitwise_value << std::endl;
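For plain display, std::bitset can also do the whole conversion itself: its constructor takes the value and to_string() yields the digits most significant bit first. The helper name to_binary_string is my own, not from the post:

```cpp
#include <bitset>
#include <string>

// Build the 8-digit binary string for a byte, MSB first.
std::string to_binary_string(unsigned char value)
{
    return std::bitset<8>(value).to_string();
}
```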


I'd recommend:



#include <homework.h>

CSelf *Self = CSelf::GetInstance();

int main(int argc, char *argv[])
{
    CHomework Problem("binary_to_from_char");

    if(!Self)
    {
        perr("WTF? I don't know myself?!");
        return 1;
    }
    else
    {
        Problem.Solve(Self);
        return 0;
    }
}


But seriously folks:

void PrintBinary(char PChar)
{
    unsigned char Mask = 0x80; // unsigned: right-shifting a plain (signed) char
                               // holding 0x80 may sign-extend, and the loop
                               // would never terminate

    while(Mask)
    {
        printf("%c", (PChar & Mask) ? '1' : '0');
        Mask >>= 1;
    }
}


btw monkey, endianness will mean absolutely nothing at the byte level.

Quote:
Original post by Omaha
I'd recommend:

If you think something is a homework problem, you shouldn't answer it. Also, your use of a singleton has no place in a to/from binary conversion mechanism.

Quote:
btw monkey, endianness will mean absolutely nothing at the byte level.


Goes to show what you know.

Hardware receives data in bytes, period - you can't subdivide lower than that in most circumstances, this is true. However, there are a few bitwise (assembly) instructions for which bitwise endianness DOES matter, such as these x86 ones:

386 BSF Bit Scan Forward
386 BSR Bit Scan Reverse
386 BT Bit Test
386 BTC Bit Test and Complement
386 BTR Bit Test and Reset

[source]. Although in C++ you cannot create bitwise-endian dependent code directly - bytewise is possible with naughty casts, but you can't simply cast a char array to a bool pointer, as booleans must be allocated at least as much memory as a char, so you don't get 8 bools to a char. However, it is quite possible in assembly, which is often embedded into C++ programs!!!

Further, the most readable output comes from using a bit order consistent with the bytewise level. If you use little-endian byte representation but a big-endian bit order within each byte, you get a mixed-up order of bits, in this order (to use hex digit positions on a 16-bit integer):

89ABCDEF01234567

You really should desire:

0123456789ABCDEF

Or:

FEDCBA9876543210

So that you can actually convert the binary into the appropriate number and vice versa using, say, calc, instead of using some f***ed up worthless piece of middle-endian junk.

This is all to show that while bitwise endianness rarely matters, it does in some situations, quite possibly including printing a binary integer in human-readable form - for debugging purposes, one would want the first character displayed to match up with the result of calling BT with zero (0).

Making such bold assumptions as you have is rarely a good idea.

Also, from an engineering viewpoint: your second example forcibly ties together the conversion to binary and the display of said binary; these should really be separate functions or steps in a simple example.

Further, printf() is old and brittle (by which I mean dangerously un-typesafe), and has no place in the C++ code you seem to be writing - prefer std::cout instead.

[Edited by - MaulingMonkey on August 13, 2005 8:16:08 PM]

Guest Anonymous Poster
Owned. Btw the Monkey is totally correct.

But seriously, homework.h?????

Quote:
Original post by MaulingMonkey
Although in C++ you cannot create endian-dependent code directly [...]


Uhh... that's totally wrong.

#include <iostream>

int main() {
int i = 1;
if ( *static_cast<char*>(static_cast<void*>(&i)) )
std::cout << "Little Endian\n";
else
std::cout << "Big Endian\n";
}
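The same check can be written with memcpy, which sidesteps the pointer casts entirely; on a little-endian machine the lowest-addressed byte of the value 1 is 1. This is a sketch of an alternative, not part of the original post:

```cpp
#include <cstring>
#include <cstdint>

// Detect byte order by inspecting the object representation of 1.
bool is_little_endian()
{
    std::uint32_t i = 1;
    unsigned char bytes[sizeof i];
    std::memcpy(bytes, &i, sizeof i); // copy the raw bytes out
    return bytes[0] == 1;             // low address holds the low byte?
}
```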

Quote:
Original post by me22
Quote:
Original post by MaulingMonkey
Although in C++ you cannot create endian-dependent code directly [...]


Uhh... that's totally wrong.


Sorry, I misworded that - I meant to state that you cannot create bitwise endian-dependent code directly. You are of course right when it comes to bytewise endianness, although to do so one must use naughty casts :-).

I've updated my post for clarity - thanks for pointing that out.

"Big endian" and "little endian" determine the order in which the bytes of data are stored.

32-bit value 0x12345678

Written on paper it's just that, 0x12345678.
In a little endian machine, if you looked at memory from low to high it would be: 0x78563412
In a big endian machine, if you looked at memory from low to high, it would be: 0x12345678

The bits within each byte are still in the same order. Pointing out some CPU instructions which read bits in a specific order has nothing to do with what endianness is. I could say that any logical AND instruction works on all bits simultaneously and therefore say that endianness means absolutely nothing by that logic. I'm surprised you didn't try to slip the shift instructions in there too, since they can operate in both directions. Is a right shift only applicable to little-endian machines and a left shift big-endian?

And your printf statement was a quite pointless jab. It's only as type-unsafe as any other function that allows you to pass variable numbers of arguments or void pointers. Type safety only becomes a major issue, inheritance aside, if you as a programmer let it, which means you write code that is wrong and you leave it that way, which is a sign of poor if not terrible programming. If you're so dependent on the compiler holding your hand for you when passing arguments, I suggest Java. Saying it has no place in C++ is quite foolish, since in any C++ standard library implementation printf could very well be implemented in C++ itself. Finally, making dogmatic statements like the ones you have made is a sign of an uncompromising programmer who only sees one or a few solutions to any problem, and a good programmer does not wear such blinders.

Omaha: I've realized probably the only thing that's going to show that you don't know what you're talking about is to provide an example of when you might want a big-endian representation, and when you might want a little-endian representation.

The following example uses GNU style inline x86 assembly (note: x86 is little-endian bit and bytewise).

#include <iostream>
using namespace std;

int main () {
const unsigned int bits = 8;

unsigned char value = 0xF0; //11110000 in big-endian
bool binary[ bits ];

// Example binary view use 1:
//Display in traditional big-endian format. This number will be pastable
//into Window's Calculator in Scientific mode. We could then select hex
//mode to get the original value, 0xF0

//Convert:
for ( unsigned int i = 0 ; i < bits ; ++i ) {
binary[ i ] = (value >> (bits - 1 - i)) & 1;
}
//Display:
for ( unsigned int i = 0 ; i < bits ; ++i ) {
cout << (binary[ i ] ? "1" : "0");
}
cout << endl;

//Final result: "11110000"

// Prove (or show to be likely) that the x86 is little-endian even bitwise.
//The BT instruction is used to access the individual bits of value.
//If the x86 has big-endian bit format, the result should be: 11110000
//If the x86 has little-endian bit format, the result should be: 00001111

//Convert:
for ( unsigned int i = 0 ; i < bits ; ++i ) {
bool bit;

asm ("xor %%eax,%%eax;"   // clear eax so only al holds the value
"mov %[value],%%al;"
"mov %[index],%%ebx;"
"bt %%ebx,%%eax;"         // carry flag = bit i of eax
"setc %[bit];"

: [bit] "=r" (bit) //outputs

: [value] "r" (value) //inputs
, [index] "r" (i)

: "%eax" , "%ebx" , "cc" //clobbered registers ("cc" covers the flags)

);

binary[ i ] = bit;
}
//Display:
for ( unsigned int i = 0 ; i < bits ; ++i ) {
cout << (binary[ i ] ? "1" : "0");
}
cout << endl;

//Final result: "00001111", not "11110000" (try it!)
//The x86 stores bits in little-endian format. If I'm using "value" as,
//say, an array of 8 mutexes accessed, then I would want the binary
//to correspond to the carry flag after:
// BT[S/C..] value,i
//rather than:
// BT[S/C..] value,(8-i)
//Certainly, this makes more sense!!!
//Assuming we're debugging our program, and don't want to write an
//assload of assembly just to check individual bits, we could simply
//use C++ to display the value in little-endian format:

//Convert:
for ( unsigned int i = 0 ; i < bits ; ++i ) {
binary[ i ] = (value >> i) & 1;
}
//Display:
for ( unsigned int i = 0 ; i < bits ; ++i ) {
cout << (binary[ i ] ? "1" : "0");
}
cout << endl;

//Final result: "00001111", identical to the assembly version
}








Quote:
Original post by Omaha
"Big endian" and "little endian" determine the order in which the bytes of data are stored.


Bytes and bits. Some people apply it to other things as well, such as dates.

"little endian" date example: 31.12.2005 (dd.mm.yyyy)
"big endian" date example: 2005.12.31 (yyyy.mm.dd)
"middle endian" date example: 12.31.2005 (mm.dd.yyyy - screwy USA system)

Quote:
32-bit value 0x12345678
Written on paper it's just that, 0x12345678.


Right, because English uses big-endian representation for writing numbers.

Quote:
In a little endian machine, if you looked at memory from low to high it would be: 0x78563412


Nope, it'd be: { 0x78 , 0x56 , 0x34 , 0x12 } - this is different. A little-endian processor interprets this as 0x12345678.

0x12 would be stored as {0,1,0,0,1,0,0,0} in binary in little-endian, whereas you're arguing it'd be stored as the big-endian {0,0,0,1,0,0,1,0}. Just because you've written it in big-endian doesn't mean it's stored in big-endian. If you were to look at the memory bitwise and tried to interpret it in big-endian, you'd get 0x48 instead of 0x12.
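The 0x12 versus 0x48 claim above is easy to verify in code: reversing the bit order of 0x12 (00010010) yields 01001000, which is 0x48. The helper name reverse_bits is invented for this sketch:

```cpp
// Reverse the bit order of a byte: bit i moves to bit 7-i.
unsigned char reverse_bits(unsigned char b)
{
    unsigned char r = 0;
    for (int i = 0; i < 8; ++i)
        if (b & (1 << i))
            r |= (unsigned char)(1 << (7 - i));
    return r;
}
```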

Quote:
Original post by Omaha
The bits within each byte are still in the same order.

That's what you're arguing, anyway. The way that BT interprets the registers (and memory) referenced would seem to indicate otherwise.

Quote:
Pointing out some CPU instructions which read bits in a specific order has nothing to do with what endianness is.


It doesn't? Then what do the CPU instructions which read (or write) individual bytes in a specific order have to do with what endianness is? Nothing? Assuming a processor's smallest addressable/modifiable unit is a word, does that mean byte endianness somehow magically warps to being big-endian? Of course not.

Quote:
I could say that any logical AND instruction works on all bits simulataneously and therefore say that endianness means absolutely nothing by that logic.


Logical AND is bit-endian independent, so for a processor that has only that operation, endianness won't matter, you're right. But the x86 is a little-endian multiple-instruction processor, and that little-endianness is reflected in its instructions that interpret individual bytes and bits, if byte and bit endianness differ.

Quote:
And your printf statement was a quite pointless jab.


No it's not. Printf is brittle, requiring you to explicitly specify the type. Assuming you need to print a variable 5 times, and you later need to change its type, with type-safe code you need only change the type; using printf you must fix every place in your code which is now broken. When I taught C++, the most common error with printing was students screwing up their format specifiers for printf. Funny thing that there are no format specifiers for you to be forced to remember with iostreams. I only wish I had taught them using iostreams instead.

Using iostreams completely prevents bugs or errors associated with printf format specifiers. There's a difference between being "dependent on the compiler holding my hand" and utilizing a language feature. Considering I started out with printf myself, before I knew of iostreams when first learning, I'd hardly call myself dependent on it.
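The type-safety point can be seen in a toy example: with iostreams the operand's type drives the formatting, so changing the variable's type cannot silently break the print statement the way a stale printf format specifier can. The function describe is invented for this sketch:

```cpp
#include <sstream>
#include <string>

// The parameter was originally int; changing it to double needs
// no other edits, whereas printf("value = %d", x) would now be wrong.
std::string describe(double x)
{
    std::ostringstream out;
    out << "value = " << x; // operator<< picks the right formatting
    return out.str();
}
```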

The OP appears to be somewhat new to programming. You're encouraging bad practice. Yes, printf is part of the standard library - for backwards-compatibility reasons. That doesn't mean you should write new code using it, which is what I'm criticizing - not its existence in the standard library.

The fact that printf is "only as bad as" variable-length arguments and void pointers is not a reason to use printf. Being broke isn't even as bad as being dead; that doesn't mean I want to be broke.

I don't use either of those items, instead, I use operator chaining and appropriate types. Callbacks? Use boost::function. Threads? Use boost::thread. No need for void * arguments when you can bind a correct argument ahead of time.

Quote:
Finally, making dogmatic statements like the ones you have made are signs of an uncompromising programmer who only sees one or a few solutions to any problem, and a good programmer does not wear such blinders.


Whine whine whine, this is an example of a pointless jab, IMO.

[Edited by - MaulingMonkey on August 15, 2005 3:05:34 AM]
