Testing High order bit?

I need to test the high order bit of a char.

A while back I wrote some bit operations code based on something I read. I've never tested it but need to use it now. Does this look about right (I really have no experience with ^ or | or &, etc.)?

template<class T>
inline void SetBit(T a_Memory, const T a_Modifier)
{
    a_Modifier | a_Memory;
}

template<class T>
inline void ToggleBit(T& a_Memory, const T a_Modifier)
{
    a_Modifier ^ a_Memory;
}

template<class T>
inline void ClearBit(T a_Memory, const T a_Modifier)
{
    a_Memory &= ~a_Modifier;
}

template<class T>
inline const bool TestBit(const T a_Memory, const T a_Modifier)
{
    return (a_Modifier & a_Memory) == 0;
}
Your code looks potentially unsafe (and buggy).
SetBit(T a_Memory, ...)  <--- a reference needed here?
Why the template? It seems your template parameter can be potentially unsafe.
Look at STL std::bitset instead.
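For example, here is a rough sketch of what std::bitset gives you out of the box (the bit positions are just for illustration):

#include <bitset>
#include <iostream>

int main()
{
    std::bitset<8> flags;          // 8 bits, all cleared

    flags.set(7);                  // set the high bit
    flags.flip(3);                 // toggle bit 3
    flags.reset(3);                // clear bit 3 again

    if (flags.test(7))             // test the high bit
        std::cout << "high bit is set" << std::endl;

    std::cout << flags << std::endl;   // prints 10000000
    return 0;
}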
Not really sure what you are trying to do with those templates???
If you want to test the high bit of a char, the first thing you should do is make sure you are using unsigned chars.
Then to test the high bit, just use:
if ( c & (1 << 7) ) { ...
To set the bit, use
c |= (1 << 7)
To clear it use
c &= ~(1 << 7)
And so on.
If you use the template approach to genericize this for arbitrary bit fields, be very, very careful. I don't think it will do what you think it will do.
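Putting those pieces into a complete little test program (just a sketch, assuming an 8-bit unsigned char):

#include <iostream>

int main()
{
    unsigned char c = 0;

    c |= (1 << 7);                      // set the high bit
    if (c & (1 << 7))                   // test it
        std::cout << "set" << std::endl;

    c &= ~(1 << 7);                     // clear it
    if (!(c & (1 << 7)))
        std::cout << "cleared" << std::endl;

    return 0;
}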
quote:Original post by Anonymous Poster
if using signed chars the high bit will be the sign, so it will return less than 0
Not always. Some compilers will clear the high bit of a character. Possibly under the assumption that a negative character doesn't make sense ...
I honestly don't mean to be getting on taliesin73's case, but this is the second post I've seen this morning with incorrect information:
quote:
If you want to test the high bit of a char, the first thing you should do is make sure you are using unsigned chars.
Untrue. Bitwise operations are not affected by the sign of the data type. The operations that are affected are right bitshifts and comparing with zero (specifically < 0, <= 0, etc.). That's it. It's completely safe to do any of the bitwise operations on a signed simple integer type (char, short, int, long, etc.).
quote:
--original quote--
if using signed chars the high bit will be the sign, so it will return less than 0
--end quote--
Not always. Some compilers will clear the high bit of a character. Possibly under the assumption that a negative character doesn't make sense ...
I've never been aware of this happening. Clearing the high bit of an integer type gives you a completely different positive value than what was previously negated, because numbers are stored in two's complement. For instance in chars, 0xFF is -1 signed or 255 unsigned, while 0x7F is 127 both signed and unsigned. Spontaneously clearing the high bit would completely change your value. It is always safe to test if a signed type is less than zero to check the high bit, but these functions seem to be designed to work with any bit.
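If you want to see that for yourself, here's a quick sketch (assuming 8-bit, two's complement chars, which is what you'll get on any mainstream compiler):

#include <iostream>

int main()
{
    signed char   s = 0xFF;   // stores -1 on a two's complement machine
    unsigned char u = 0xFF;   // stores 255

    std::cout << (int)s << std::endl;                    // -1
    std::cout << (int)u << std::endl;                    // 255
    std::cout << (int)(signed char)0x7F << std::endl;    // 127 either way

    // testing the high bit works the same on both
    std::cout << ((s & 0x80) != 0) << " " << ((u & 0x80) != 0) << std::endl;   // 1 1
    return 0;
}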
Void was absolutely right that this code will not work -- you're passing things by value, not by reference. The original values will be unchanged once you leave the functions.
Furthermore, this code seems to be fairly pointless in that all it's doing is creating a templated function call to a simple operation. Something that might be more handy would be:
#include <cassert>

template<class T>
bool TestBit(const T &value, unsigned char bit)
{
    assert(bit < sizeof(T) * 8);
    return 0 != (value & (1 << bit));
}
I almost never write templates, but I'm pretty sure the "inline" isn't needed. This code will test the bit'th bit of value, which is a little more useful than just an alias for operator &.
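Usage would then look something like this (values picked just for illustration):

#include <cassert>
#include <iostream>

// assumes the TestBit template from above is in scope

int main()
{
    unsigned char c = 0x80;

    std::cout << TestBit(c, 7) << std::endl;   // 1 -- the high bit is set
    std::cout << TestBit(c, 0) << std::endl;   // 0 -- the low bit is not
    return 0;
}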
You could always create a mask to test it, e.g. #define CHAR_HIGH_BIT (or whatever you want to call it) as 128.
Then to test for the high bit:
if (c & CHAR_HIGH_BIT)
{
}
Easier to read and maintain I think. But maybe I am a muppet.
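For instance (CHAR_HIGH_BIT is just the name suggested above):

#include <iostream>

#define CHAR_HIGH_BIT 0x80   // 128, i.e. bit 7

int main()
{
    unsigned char c = 200;   // any value with the high bit set

    if (c & CHAR_HIGH_BIT)
        std::cout << "high bit set" << std::endl;

    return 0;
}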
quote:Original post by Stoffel
I honestly don't mean to be getting on taliesin73's case, but this is the second post I've seen this morning with incorrect information:
Go right ahead
quote:
Bitwise operations are not affected by the sign of the data type.
You'd think so, wouldn't you? See below.
quote:
--
Not always. Some compilers will clear the high bit of a character. Possibly under the assumption that a negative character doesn't make sense ...
--
I've never been aware of this happening. Clearing the high bit of an integer type gives you a completely different positive value than what was previously negated, because numbers are stored in two's complement. For instance in chars, 0xFF is -1 signed or 255 unsigned, while 0x7F is 127 both signed and unsigned. Spontaneously clearing the high bit would completely change your value. It is always safe to test if a signed type is less than zero to check the high bit, but these functions seem to be designed to work with any bit.
When I say some compilers, I guess I mean Borland C from whatever version was current in 1994. It may have been a compiler bug, and the official standard may well say it's okay. But here's the basic story:
Some friends of mine were working on a project in which they were using the high bit to flag something, I can't remember what. But they were using chars, and bit-testing on the high bit. They found their code was not working as expected and, being clever programmers, they asked me to help debug it (getting someone else to debug your code seems to work much better). I had a look at their code and suggested that they change the char to an unsigned char. They did this, and the code worked as expected.
Was my suggestion the correct solution? Maybe not, but it fixed the problem. We did a bit of analysis just to verify and concluded that Borland was indeed stripping the high bit.
As far as changing the value by stripping off the sign bit, if you're talking about ASCII, it takes the character out of extended ASCII back into the 7-bit range ... and maybe this is what Borland was doing, on the assumption that these chars were being used as chars and not 8-bit ints.
quote:Original post by taliesin73
--snip--
Not always. Some compilers will clear the high bit of a character. Possibly under the assumption that a negative character doesn't make sense ...
--snip--
We did a bit of analysis just to verify and concluded that Borland was indeed stripping the high bit.
As far as changing the value by stripping off the sign bit, if you're talking about ASCII, it takes the character out of extended ASCII back into the 7-bit range ... and maybe this is what Borland was doing, on the assumption that these chars were being used as chars and not 8-bit ints.
That would be quite clearly against the standard, since
sizeof(signed X) == sizeof(unsigned X)
must hold for any integral type X. By implicitly clearing the sign bit in a signed char, the size/precision is reduced, but that's not all: the data type is no longer signed!
So if what you're saying is correct, the Borland compiler made your 'signed 8-bit char' into an 'unsigned 7-bit char'. I find it a little hard to believe that such a severe bug was not discovered during beta-testing of the compiler.
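If you want to convince yourself your own compiler isn't doing anything like that, here's a quick sketch (assuming a two's complement machine, where assigning 0x80 to a signed char yields -128):

#include <iostream>

int main()
{
    // the standard requires signed and unsigned variants to be the same size
    std::cout << (sizeof(signed char) == sizeof(unsigned char)) << std::endl;   // 1

    // and the high bit should not be silently stripped
    signed char c = 0;
    c |= 0x80;                                      // set the high bit (value becomes negative)
    std::cout << ((c & 0x80) != 0) << std::endl;    // 1 -- the bit is still there
    std::cout << (c < 0) << std::endl;              // 1 -- and the char reads as negative
    return 0;
}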