Converting from int to char[] and back?


Recommended Posts

Hi, I'm trying to convert an integer value to a char[4] array. The char[4] array will represent the color channels of a color class with r, g, b, a variables, each ranging from 0 to 255. I have it almost working, but there is an issue: when converting to char and then casting that to an int to get a value between 0 and 255, the value often turns out negative. That is fine for converting back to int later on, but I can't use a negative value in the color channels, and if I force it positive there is no way of knowing whether it should have been negative when converting back to int later. Here is the code I have, which works for the conversion:
#include <iostream>

typedef unsigned char byte; // stand-in for the Windows BYTE type from "windows.h"

int main()
{
	//Converting to char[]
	int val = 600012;
	char col[4];
	col[0] = (char) (byte)(val &  0xFF  );
	col[1] = (char) (byte)((val >> 8)  & 0x00FF);
	col[2] = (char) (byte)((val >> 16) & 0x0000FF);
	col[3] = (char) (byte)((val >> 24) & 0x000000FF);

        //Cast char as int
	int c0 = col[0]; //Ends up negative, can't use in color channel
	int c1 = col[1]; //Ends up negative, can't use in color channel
	int c2 = col[2];
	int c3 = col[3];
	std::cout << c0 << '\n';
	std::cout << c1 << '\n';
	std::cout << c2 << '\n';
	std::cout << c3 << '\n';

        //Converting back to int
	int v = 0;
	v += (int)(((byte)col[0])& 0xFF);
	v += (int)((((byte)col[1]) << 8) &  0xFF00);
	v += (int)((((byte)col[2]) << 16) & 0xFF0000);
	v += (int)((((byte)col[3]) << 24) & 0xFF000000);

	std::cout << "Converted back: " << v << '\n';
	return 0;
}

Any idea how to make this work, so that I can assign the chars to the color channels and later convert back without having to deal with negative values?

A char is (on your platform) a signed number from -128 to 127.
An unsigned char ranges from 0 to 255.
Your negative values are off by exactly 256 (a byte of 255 comes back as -1, 254 as -2, and so on), so you could just add 256 to any negative result.
But the real fix is to use unsigned char col[4].

with some nasty hackery you can get this too (but be wary of endian problems depending on platform)

int color = 600012;
unsigned char *col = (unsigned char*)&color;
int c0 = col[0];
int c1 = col[1];
int c2 = col[2];
int c3 = col[3];

A much more readable way to write your bit-shifting is to mask first with a full-width mask, so it is easy to see exactly which bits you are taking. It is also less error prone: you know you need 8 hex digits, so 0x00FF0000 vs 0x0FF0000 stands out as very out of place for a color manipulation, whereas 0xFF0000 could look just as correct as 0xFF00000 and someone reading your code probably won't catch that on a first pass.
int c0 = (color & 0xFF000000) >> 24;
int c1 = (color & 0x00FF0000) >> 16;
int c2 = (color & 0x0000FF00) >> 8;
int c3 = (color & 0x000000FF) >> 0;

Ah yes, I actually figured it out myself just now. :) Defining it as

unsigned char col[4];

makes it work.

