Big integer (128 bit++)

Hi,

I'm working on a little side project which basically encodes images (32x32 with 256 colors for now) into integer values and vice versa. I've got the algorithm figured out, but my problem is the space: I need to store numbers possibly as high as 256^1024 for now, and later I want to move on to higher resolutions and more colors. So I wondered, is there any standard type in C++ that allows me to store such (insane) numbers? If not, what can I do about it? Any tutorials on how to create your own (unsigned) integer type? I can't calculate the exact value of 256^1024 right now, as I lack a proper calculator on my iPhone, but I guess a 64-bit unsigned int would not be enough. Any suggestions?

Also, what about speed? Would there be any drawback to using 128-bit or 256-bit integers with a custom type, if it's written well (no assembler)?

Also, for decoding a given value into the picture, I need to perform a lot of divisions and modulo operations (divide x by 256 repeatedly until x reaches 0, then for every division that happened take (x / 256^i) % 256). I've heard divisions are very slow, and given my algorithm I may have to perform up to 1024 of them to decode an image. I don't necessarily need to do this in real time, but it would be great if it ran as fast as possible. Any ideas?

The term to Google would be "Bignum". There are libraries out there, or you could find information on how to implement your own. I recommend the former if you're just getting work done.
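
If you just want results quickly, one option (an assumption on my part; GMP's mpz_class would work much the same way) is Boost.Multiprecision, which is header-only. A minimal sketch:

    #include <boost/multiprecision/cpp_int.hpp>
    #include <iostream>

    int main() {
        using boost::multiprecision::cpp_int;

        // 256^1024 == 2^8192, so one bit-shift builds the value directly.
        cpp_int limit = cpp_int(1) << 8192;

        // Prints 2467: the value has roughly 2.5 thousand decimal digits,
        // far beyond any built-in integer type.
        std::cout << "256^1024 has " << limit.str().size()
                  << " decimal digits\n";
    }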

/256 and %256 are easily implemented by shifts and masks, at least if you do them on integers.

Just use x >> 8 to divide by 256 and x & 0xff to get the remainder.

Shifts and masks are about as fast as an instruction can be on most processors.
On some, you will even get them for free if you do calculations with the result. (like ARM)
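
For instance, a sketch of the decode loop (shown with a plain 64-bit integer for illustration; a bignum type that supports >> and & slots in the same way):

    #include <cstdint>
    #include <vector>

    // Peel off base-256 digits with shifts and masks instead of / and %.
    std::vector<uint8_t> decode(uint64_t x) {
        std::vector<uint8_t> pixels;
        while (x != 0) {
            pixels.push_back(x & 0xff); // same as x % 256: lowest digit
            x >>= 8;                    // same as x / 256: drop that digit
        }
        return pixels;
    }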

@Rip-off: Thanks, I think at first I'll go with the former; however, I think it could be a good learning experience to do it on my own. Guess the whole project could be a good chance to start with SIMD and SSE/SSE2 instructions, couldn't it?

@Olof Hedman:

Thanks, I didn't know about that! Should be a good addition to speed some things up in my main project, too. However, as I said, I want to use really, really big numbers (probably 1000 digits). Would shifts and masks work for these too, whether I use existing libraries or build my own, and if yes, would they be as fast (no floating point)?


a number as high as 256^1024 for now


256 * 256 * 256 * ... * 256 (1024 times)?

256 = 2^8, so 256^1024 = 2^8192, which is roughly 10^2466. For scale, a gigabyte is 10^9 bytes and a petabyte is 10^15.

Storing one such value takes only 1024 bytes, but enumerating anywhere near 2^8192 images would need vastly more than the combined digital storage in existence.

May I ask what you're trying to achieve? I get the idea you're trying to save space by packing the image into a large integer, but that won't work: you'll need an integer with just as many bits to store the 'compressed' image as when you store it as a byte array. Look up the pigeonhole principle.

32 * 32 * 8 bits = 1024 bits.

The original image is already encoded and it's already a 1024 bit integer. No need to calculate anything.

For example:

    struct Image {
        int w, h;
        unsigned char pixels[32 * 32]; // w*h entries, one byte per pixel
    };

The pixels array is your integer. It's just not represented as such, and using a bignum would not change that; it would probably use machine words instead of chars as its digits, but the principle remains the same.

If trying to encrypt such an image, just do character-wise encoding of the pixels.

Some operations are also trivial. For example:
- Image / 256^i drops the lowest i pixels
- (Image / 256^i) mod 256 = pixels[i]
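
To make that concrete, here is a sketch of the round trip (again assuming Boost.Multiprecision; any bignum with shifts would do), packing the pixel bytes into one big integer and reading one back:

    #include <boost/multiprecision/cpp_int.hpp>
    #include <cassert>
    #include <vector>

    int main() {
        using boost::multiprecision::cpp_int;
        std::vector<unsigned char> pixels(32 * 32, 0);
        pixels[5] = 42;

        // Pack: pixel i becomes base-256 digit i of the big integer.
        cpp_int image = 0;
        for (std::size_t i = pixels.size(); i-- > 0; )
            image = (image << 8) | pixels[i];

        // Unpack: (image >> 8*i) & 0xff recovers pixels[i].
        assert(((image >> (8 * 5)) & 0xff) == 42);
    }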

To both of you: it's not exactly what I want to achieve. It's about a crazy idea I had for a random image generator. Basically, for an image of any given size, like 32x32, by choosing a random number (for my encoded image), performing various operations on it (like looping through all colors on pixel 24,16), and saving or outputting these images, you would, after a large amount of time, get some pictures that actually make some sense. For 256 colors, you have 256^1024 possible combinations.

I know it may sound stupid. When I first got the idea I made a mistake, thinking that even for 640x480 with 32-bit color there would be far fewer combinations (it's 16 million^(640*480), not 16 million * 640 * 480, apparently). I got stuck on this idea, and even if it can't really be used as a serious portfolio project, I still at least want to try it. If you ask why I want to use an int representing the image: just assigning random numbers to the single pixels proved useless, as it created only messy images that all looked the same. So I'm probably better off with a randomly generated number with 10k digits. Oh, I know 256^1024 is a huge number of possible images. But as I work on this project I can try out different programming techniques I don't want to apply to my game project without testing first. So, I hope you get the idea ^^


Just assigning random numbers to the single pixels proved useless, as it created only messy images that all looked the same. So I'm probably better off with a randomly generated number with 10k digits.


Why would you think a random number of 10k digits would be any different to an array of pixels with randomly assigned values?


To both of you: it's not exactly what I want to achieve. It's about a crazy idea I had for a random image generator. Basically, for an image of any given size, like 32x32, by choosing a random number (for my encoded image), performing various operations on it (like looping through all colors on pixel 24,16), and saving or outputting these images, you would, after a large amount of time, get some pictures that actually make some sense. For 256 colors, you have 256^1024 possible combinations.

I am confused; why do you need to encode a bunch of tiny numbers into one huge number to do this?

@Aardvajk: It appears logical to me. If I only calculate one random number between 0 (oh, I might raise that lower bound; no need for pitch-black images) and 256^1024, it is much more likely to hit a "good" image than if I assigned 1024 random numbers per image. The latter looks like some kind of generic noise texture. Always. Maybe because of the way rand() works, but if I generate one random double between 0.0 and 1.0 and multiply it with 256^1024, I imagine I get better results. Or am I wrong?

@way2lazy2care: As I said, 1024 rand() calls per image is just too much. Encoding it even makes it easier to keep a database of which images I already generated. Memory-intensive nevertheless, but a lot more convenient (at least to me).


but if I generate one randon double between 0.0 and 1.0 and multiply it with 256^1024, I imagine I get better results

A double between 0 and 1 carries less than 64 bits of data. It will be much less random, though perhaps it might generate an interesting pattern, I don't know. Certainly, there is a large range of images it could never reach.

It isn't clear what you mean by images that "make sense". Give us a real example of the kind of images you are trying to build. For example, when building top-down terrain/map images there are lots of well known algorithms that give good results.

Can you give us any more information?
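
On the first point, the mantissa width is easy to check (a minimal sketch; std::numeric_limits reports it directly):

    #include <iostream>
    #include <limits>

    int main() {
        // A double's mantissa holds 53 bits, so scaling a double in [0, 1)
        // up to an 8192-bit range can only ever reach a tiny sliver of it.
        std::cout << "mantissa bits in a double: "
                  << std::numeric_limits<double>::digits << "\n"; // 53
    }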

Well, I know that a double doesn't have that high a range. I might fix that by subtracting a bias value between 0 and 255^3 or something, and by always generating 1000 images (random number + i) or something.

By images that make sense, I mean anything that reminds you of something. Maybe a deformed low-res head of something that could be a human being. An icon of a game we know. A possible icon for a game. Some generic GUI element like a close button. You get the idea? I am aware that it is unlikely. But using one random number instead of x may give me something like this if I, for example, create 1000 images like this every day and check them.

As I said, this has no professional use other than to prove that my idea works. My main idea was that in an image like 640x480 with 16 million colors, you can depict everything. Just everything that ever existed, or didn't exist. So by choosing the right value for the whole image, you would get a certain picture. I underestimated the amount of possible images, but with 32x32 and 256 colors, it should be fine.

I don't see why generating a double is going to help here, unless the bit patterns that represent doubles between 0 and 1 happen to produce compelling images (which I doubt).

If you want cool icons or buttons, you could create thousands of random little 16x16 or 32x32 8-bit greyscale images, and I imagine you'd find some interesting ones pretty quickly. You can get the basic shape of an icon and then clean it up. Likewise, picking the colours yourself would probably be more productive than waiting for the algorithm to generate a pretty-looking picture with a good palette.

I think you would be foolish to think that you'll get lots of great images from this algorithm. There are an awful lot of images out there, and most will be useless noise.

My thought about the double was just that the range from 0.0 to 1.0 would contain every picture possible. Of course, I don't know exactly how the 64 bits behave (ah, even more, I'd use unsigned; that is going to give some extra precision, isn't it?), but choosing enough values should give me some results. There is a lot of room for "errors": being off by ±256^2 would, except in the worst case where a lot of pixels are set to 255, still produce the same image with just two pixels messed up. I could even try to increment a double from 0.0 by really small values 100 times per frame (if performance is good enough) and pause if the thing starts to look like something. You know, I don't even plan to use these images (unless I get a really, really cool one), but even if I get an image that looks kind of like a coin, I'd be really, really glad :) I could even use it e.g. as a screensaver and make it run whenever I don't use my PC. Nevertheless, thanks for the advice! I totally agree that using greyscale images would be a good idea if my color implementation fails, e.g. if I get only noise textures in the first week.

You are right that it would be foolish to expect lots of great pictures. It's just... with some luck, I might be able to get one. I might even be able to discover some sort of algorithm. Even though the numbers themselves will be too big to inspect, there are certainly some kinds of relationships; for example, pictures with more than 60% of the same color are not likely to give a good result, nor are pictures with a lot of different colors, or with the same color spread out. Whew, I guess I'm thinking way too deep into this. When I get home on Saturday, I'll simply try it out. If it fails, well, I didn't lose anything; the basic renderer is set up, I just lacked a proper algorithm and a way to store these large numbers.

If you make a huge random number, it will be just as random as making a ton of smaller randoms and then building a huge number from those.

In the first option you just set all bits to a random value at once; in the second option you set them to random bits in small chunks. No difference.

Really? Keep in mind, I'd not add them up directly; I'd add them as rand() * 256^i. I don't know, it has been a while since I had maths and talked about probability theory and stuff, but it just feels natural to me that the whole number behaves differently..? Also, I'm using the rand() function, which afaik only gives pseudo-random numbers, so obviously when creating multiple rands (1024 in my case) there will be some kind of pattern. It's really hard to believe that both cases are the same... Can somebody point me to some hard evidence?

It's still just setting every pixel to a random value... :P

There could be a pattern if you for some reason generated a 1 TB picture, but even then you could just use that 1024-bit integer inside the random function instead of trying to store the whole picture in a way bigger integer.

I don't think it's really possible to create meaningful images by setting everything at random, unless you have a few lifetimes to look at the pictures to check whether they're meaningful. You need an algorithm, or algorithms to combine, so that most pictures have at least some meaningful portions.

Think about this:

It wouldn't be random if there were a difference between creating a lot of small numbers and creating one big number.

Also think about this:

You cannot represent an unlimited number of pictures using a value between 0.0f and 1.0f, simply because numbers are represented with a limited number of digits. Images shown on a computer screen are also limited, so in theory you can represent every combination within that limited set using a number between 0.0f and 1.0f, even though it might need incredibly huge numbers. Finding an algorithm to map such numbers to all the possible combinations of images is not hard, but finding a decent picture within the vast number of different mappings certainly is.

Third thing to think about:

You will by no means be able to compress anything using a method like this.

It may be that I didn't get what kind of imagery you are trying to generate. Even with the limitations I mentioned above, you may be able to get interesting results depending on your choice of algorithm. :)


@way2lazy2care: As I said, 1024 rand() calls per image is just too much. Encoding it even makes it easier to keep a database of which images I already generated. Memory-intensive nevertheless, but a lot more convenient (at least to me).


You'll still generate more images than you'll have time to process doing it the long way, and you don't have to obfuscate any of your functionality with a stupidly large number. I'm not positive that generating a 128-bit random int on a 64-bit system is any faster than generating two 64-bit ints, and I feel like converting this super huge int into a viewable picture will be much, much slower than any performance gain you'll get from generating one random number.

You might want to read about the Library of Babel, which does basically this, but with books instead.

In other words, your generated images (if you increase the resolution) will contain photos of you at every instant of your life, from every angle, both now and in the future, as well as of everybody else; pictures of every book; frames from every film, and from every conceivable film; photos of the soundtracks of every movie and song, and of every word ever said, as images of sine waves. They will also contain every piece of art and every object in the universe (from all angles, distances, etc.), and all of this data in every possible format (since the images can be interpreted as binary strings).

As you can see, your task is hopeless: for every one of the photos above, there is an unimaginable ocean of pure noise around it, so you will never get anything but noise out of it. The chance of getting something recognizable is inconceivably small.

Let's say you had everybody on earth employed just looking at your 32x32 images, and they worked for the entire age of the universe. Then replace every star in the universe with another instance of the earth, with the same number of people working for the same time. Now imagine this universe being one of many, as many as there are grains of sand on the earth, and these super-universes being placed in a grid. The grid is the width of the current universe, with one super-universe placed every meter. Then imagine this grid having three axes, forming a cube the size of the universe with a super-universe in every cubic meter. Then repeat this whole process from the start 18.5 times over and you would reach the number of permutations there are.


(this is (256^1024) ≈ ((13.7 × 365 × 24 × 60 × 60) × (7 billion) × (100 sextillion) × (6 × 10^29) × ((10^26)^3))^18.5)

As to your question, there is no difference between generating 256 32-bit numbers and one 8192-bit number. So just generate each pixel on its own. If you use a good random number generator, it should not show any discernible pattern for the next 100 years or so.
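
For instance, a minimal sketch with the standard <random> facilities (std::mt19937 standing in for "a good random number generator"):

    #include <cstdint>
    #include <random>
    #include <vector>

    // Fill a 32x32, 8-bit image pixel by pixel; statistically this is the
    // same as drawing one 8192-bit random number and slicing it into bytes.
    std::vector<uint8_t> random_image(std::mt19937& rng) {
        std::uniform_int_distribution<int> byte(0, 255);
        std::vector<uint8_t> pixels(32 * 32);
        for (auto& p : pixels)
            p = static_cast<uint8_t>(byte(rng));
        return pixels;
    }

    int main() {
        std::mt19937 rng(std::random_device{}());
        auto img = random_image(rng);
    }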

You don't seem to grasp the magnitude of the problem. You think 32x32 at 8 bits is more manageable?
Even 16x16 at 4 bits still gives 2^(16 × 16 × 4) = 2^1024, about 1.8 × 10^308 possible images. If you consider every such icon created by man to date, there are probably still a billion billion billion times more images that have never been created, most of which would pretty much look like TV static.

Basically, I think this is what you call a "fool's errand".

Vectorian puts it straight.

Also, your assumption that you would get any different result by scaling up a double is wrong.
Any extra patterns you would see in the double, if you scaled it up to cover a whole image, would be aliasing artefacts from the double's low resolution.

Sometimes it's useful to think about binary numbers as a (rubber) ruler with a fixed number of "steps" on it.
You can stretch the ruler to any length you want (0.0 - 1.0, 0 - 65535, 4.2 - 8.9, or whatever), but the number of ticks on it will never change.

What you actually do when you generate your double and multiply it up is generate a random image only a few pixels big (a double's 53-bit mantissa covers six or seven 8-bit pixels) and scale it up to cover the whole image.

I remember musing over a pint in the local bar: with 1024-bit numbers, you could generate every bit of software possible for a 1K ZX81. After the third pint my brain melted over how many people I'd need to test every combination to see what would work or not, never mind just building the data :D

With 1024-bit numbers, you can number all the electrons in the universe and still have plenty of leftover "IDs"... :D

Edit:

...with 1024-bit numbers, you could generate every bit of software possible for a 1K ZX81...

Wait, wouldn't you need 8192-bit numbers for that? (1K of RAM is 1024 bytes = 8192 bits.)
