Hex in Two's Complement

Started by l jsym l; 10 comments, last by l jsym l 13 years, 1 month ago
Hey, I was just wondering if there is any possible way in C++ to represent a hex number in two's complement. By this I mean: say I have a hexadecimal value of 7FF, or 111 1111 1111 in binary. I need to interpret this in two's complement so I get a value of -1 instead of a value of 2047.
If I didn't make myself clear, just let me know and I'll try to explain what I mean better.

Thanks.
l jsym l
If I understand what you're asking, then read the hex number in as an unsigned int and reinterpret_cast it as a signed int, assuming that you're on a platform where the integer representation of a signed int is two's complement (which should be pretty much every common platform, and most of the goofy ones as well).
You could probably be more clear. An int datatype is already "a hex number in two's complement", so I think you either mean something else or you don't really understand what you mean.

7FF is what... a string? An int? How are you inputting it, and what do you want the output to be? Are we talking about strings of ASCII characters?

You can't just say "7FF in two's complement" without knowing how wide the number is. Is this some hypothetical number type that is 11 bits wide? If we're talking about int datatypes, then 7FF is just 7FF, because the number is really "00000000 00000000 00000111 11111111". That's not a negative number in two's complement notation.
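
To make that concrete, here's a minimal sketch (assuming a 32-bit int) that prints the full bit pattern of 0x7FF:

#include <bitset>
#include <iostream>

int main()
{
    int a = 0x7FF;  // fits comfortably in 32 bits, so the sign bit never comes into play
    std::cout << std::bitset<32>(a) << " = " << a << '\n';
    // prints the 32-bit pattern: 21 zero bits followed by eleven 1 bits, i.e. 2047
}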
My Projects:
Portfolio Map for Android - Free Visual Portfolio Tracker
Electron Flux for Android - Free Puzzle/Logic Game

I did say that if I wasn't being clear enough I would clear it up.
What I meant is that I have a hex number, 7FF. I also said that I don't want it to output 2047; instead, I want it to output -1.
Therefore it is stored as an int, such as:
int a = 0x7FF;

Thanks for the reinterpret_cast suggestion though, SiCrane. I'm trying to figure it out as we speak.
l jsym l
I don't think reinterpret_cast will help you; 2047 is still 2047 whether it's an int or an unsigned int.

You'll likely need to implement the logic rules yourself, because an 11-bit number is not a real datatype.
My Projects:
Portfolio Map for Android - Free Visual Portfolio Tracker
Electron Flux for Android - Free Puzzle/Logic Game
OK, that's what I was thinking too, but I was unsure of how to do it.
l jsym l
My recommendation is to do a 'sign extend'. Basically, do what SiCrane said and read the number in as an unsigned int. Then check the bit that you consider to be the 'sign' bit. If the sign bit is set, then set every bit above it to 1. Then reinterpret_cast the result to an int and it should work. I actually wrote a program to do this, but this could be homework or something, so I don't want to just hand it over.
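
For anyone following along, here's a minimal sketch of that approach, assuming the value was read into a 32-bit unsigned int (the signExtend name and the hard-coded test values are just illustrative):

#include <cstdint>
#include <iostream>

// Sign-extend the low `bits` bits of `raw` to a full 32-bit signed value.
// A sketch only: assumes 1 < bits <= 32 and a two's-complement target.
std::int32_t signExtend(std::uint32_t raw, unsigned bits)
{
    std::uint32_t signBit = 1u << (bits - 1);
    if (raw & signBit)
        raw |= ~(signBit | (signBit - 1));   // set every bit above the sign bit
    return static_cast<std::int32_t>(raw);   // reinterpret the bits as a signed value
}

int main()
{
    std::cout << signExtend(0x7FF, 11) << '\n';   // prints -1
    std::cout << signExtend(0x3FF, 11) << '\n';   // prints 1023
}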

C++: A Dialog | C++0x Features: Part1 (lambdas, auto, static_assert) , Part 2 (rvalue references) , Part 3 (decltype) | Write Games | Fix Your Timestep!

Yes, it is sort of for an assignment, so I would like to figure it out myself, but I do have a question: do you know where I can find any information on doing 'sign extends'?
l jsym l

Yes, read my previous post :). I cover the basic steps needed to do this; are you confused about any of them in particular? That said, I came up with a simpler way to do what you want, which you can do manually without much effort:

1. If the hex number is negative, continue on (otherwise just print it as-is).
2. Take the two's complement of the number (to make it positive).
3. Output '-' and then output the result of step 2.

Basically, a negative number can be printed by turning it positive first.
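
A minimal sketch of those three steps, assuming the 11-bit value is already sitting in an unsigned int (the 0x400 and 0x7FF masks are specific to an 11-bit width):

#include <iostream>

int main()
{
    unsigned int raw = 0x7FF;   // 11-bit value; bit 10 is the sign bit

    if (raw & 0x400)                                   // step 1: is it negative?
    {
        unsigned int magnitude = (~raw + 1) & 0x7FF;   // step 2: two's complement, masked back to 11 bits
        std::cout << '-' << magnitude << '\n';         // step 3: minus sign, then the magnitude
    }
    else
    {
        std::cout << raw << '\n';                      // non-negative: print as-is
    }
}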

C++: A Dialog | C++0x Features: Part1 (lambdas, auto, static_assert) , Part 2 (rvalue references) , Part 3 (decltype) | Write Games | Fix Your Timestep!

In this case, if the number of bits in the integer is always 11 (which it basically has to be, or how else would you do this?), then it's a matter of either ORing it with 0xFFFFF800 or not, depending on whether the sign bit (bit 10) is set (assuming a 32-bit int).
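
As a rough sketch, that check-and-OR could look something like this (assuming the raw 11-bit value has been read into an unsigned int named raw, and the usual two's-complement representation):

#include <iostream>

int main()
{
    unsigned int raw = 0x7FF;   // the 11-bit value, read in as unsigned

    // OR in the upper 21 bits only when bit 10 (the 11-bit sign bit) is set
    int value = (raw & 0x400) ? static_cast<int>(raw | 0xFFFFF800)
                              : static_cast<int>(raw);

    std::cout << value << '\n';   // prints -1 here; 0x3FF would print 1023
}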
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

