
## Pi decimals as a random number generator


14 replies to this topic

### #1 rAm_y_ (Members)

Posted 13 August 2014 - 08:16 PM

Not sure if there's anything in this, but I know random number generators are a sought-after thing in programming.

So pi can be described by the Taylor series 4 * (1 - 1/3 + 1/5 - 1/7 + 1/9 - ...) (the Leibniz series).

However, nobody has ever been able to find any order in the decimal representation it produces. Some think it's truly a random number. Is there any way to use this as a random number generator?
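For what it's worth, the series mentioned above converges very slowly, which already hints at the practical problem. A quick sketch of the partial sums (illustrative code, not from the thread):

```python
# Partial sums of the Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)
def leibniz_pi(terms):
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

# Convergence is painfully slow: the error after n terms is roughly 1/n,
# so a million terms buy only about six correct digits.
print(leibniz_pi(1_000))
print(leibniz_pi(1_000_000))
```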

### #2 ApochPiQ (Moderators)


Posted 13 August 2014 - 08:22 PM


You could, but it's not particularly efficient. If I want to generate a thousand random numbers, I have to expand the Taylor series a painfully long way, which gets into numerical precision problems and other nastiness. There exist much more effective ways to get pseudo-random numbers.
Wielder of the Sacred Wands

### #3 AgPF6 (Members)


Posted 13 August 2014 - 08:46 PM

Hello rAm_y,

It also depends on the accuracy of your simulation. If you just need a number to pick an index into something, for example some bonus item, then speed is more important. Look into XorShift.
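For reference, the xorshift family really is tiny. A minimal sketch of a 32-bit step using Marsaglia's (13, 17, 5) shift triple (in C this is three shifts and three XORs; the Python below is just for illustration):

```python
def xorshift32(state):
    # One step of Marsaglia's xorshift32; state must be a nonzero 32-bit integer.
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state

s = 123456789  # any nonzero 32-bit seed
samples = []
for _ in range(5):
    s = xorshift32(s)
    samples.append(s)
print(samples)
```

The generator is a bijection on nonzero 32-bit states with period 2^32 - 1, so it never gets stuck, but it is not cryptographically secure.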

### #4 Álvaro (Members)


Posted 13 August 2014 - 09:03 PM


No, the digits of pi are not random. They can be generated using a very short program, and that's the very definition of non-random.

### #5 Bacterius (Members)


Posted 13 August 2014 - 11:25 PM

> You could, but it's not particularly efficient. If I want to generate a thousand random numbers, I have to expand the Taylor series a painfully long way, which gets into numerical precision problems and other nastiness. There exist much more effective ways to get pseudo-random numbers.

The digits of pi can be generated more efficiently (and accurately) than by expanding one of its Taylor series. But, yes, it is not a particularly good idea: it is slow (and gets slower and slower as you go), and the digits of pi are not known to be uniformly distributed anyway. Do not use it as a pseudorandom number generator; it has zero advantages and many disadvantages compared to a multiply-with-carry generator, a Mersenne twister, or a counter-mode pseudorandom permutation.
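The multiply-with-carry step mentioned here is about as small as xorshift. A sketch of Marsaglia's classic pair of 16-bit MWC generators (the MWC half of his KISS generator; the multipliers 36969/18000 and the seeds are from his widely-circulated posting, treat details as an assumption):

```python
def mwc_step(z, w):
    # Two 16-bit multiply-with-carry generators: each word keeps its value
    # in the low 16 bits and its carry in the high 16 bits.
    z = 36969 * (z & 0xFFFF) + (z >> 16)
    w = 18000 * (w & 0xFFFF) + (w >> 16)
    out = ((z << 16) + w) & 0xFFFFFFFF  # combine into one 32-bit output
    return z, w, out

z, w = 362436069, 521288629  # canonical seeds; any nonzero values work
outputs = []
for _ in range(5):
    z, w, out = mwc_step(z, w)
    outputs.append(out)
print(outputs)
```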

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

### #6 plainoldcj (Members)


Posted 14 August 2014 - 07:52 AM

EDIT: bullshit comment, ignore

Edited by plainoldcj, 14 August 2014 - 08:04 AM.

### #7 Buckeye (GDNet+)


Posted 14 August 2014 - 09:19 AM

> No, the digits of pi are not random. They can be generated using a very short program, and that's the very definition of non-random.

If, by "not random" you mean the same sequence would be generated each time, I would agree. If you really mean to say that the digits of pi are not random, mathematicians at Purdue, Berkeley, and elsewhere appear to have concluded, at best, that pi's randomness is an open question. I can find no reference that indicates that (in the sequence of digits of pi) the digit following any finite sequence can be predicted, which is one indication of randomness.

With regard to the OP:

> is there any way to use this as a random number generator?

If you need to generate a number that cannot be predicted before it's generated (perhaps the intent of Alvaro's statement), that's problematic.

For instance, assuming that the digits of pi are random (as mentioned above, there are questions in that regard), you would have to develop an algorithm to randomly select a sequence from those digits. Therein lies the rub: a random selection algorithm such as that would either be predictable or would need yet another such supporting algorithm.

Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.

You don't forget how to play when you grow old; you grow old when you forget how to play.

### #8 Álvaro (Members)


Posted 14 August 2014 - 11:47 AM

The only precise notion of randomness I am familiar with is Kolmogorov randomness. Under that definition, a string is considered random if it is shorter than any program that can generate it. A sufficiently long string of digits from pi will certainly be longer than a program that generates it, and therefore it is not random.

Of course, by this definition, the output of a pseudo-random number generator is never random. The article from Purdue seems to be concerned with whether the digits of Pi pass something like the Diehard tests. The article from Berkeley talks about the quest to prove pi is normal in some base. Those might be interesting research questions but, even if the answers were affirmative, it would make very little sense to use digits of pi as pseudo-random numbers. There are much cheaper ways to generate high-quality pseudo-random numbers.
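To make "very short program" concrete: Gibbons' unbounded spigot streams the decimal digits of pi using nothing but integer arithmetic, in a handful of lines (a sketch of the published algorithm):

```python
from itertools import islice

def pi_digits():
    # Gibbons' unbounded spigot: yields decimal digits of pi one at a time,
    # using exact integer arithmetic (no floating point, no fixed precision).
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is now certain
            q, r, n = 10 * q, 10 * (r - n * t), 10 * (3 * q + r) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 15)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]
```

Note that each digit costs more than the last (the integers grow without bound), which is also why this makes a poor pseudo-random number generator in practice.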

### #9 SeanMiddleditch (Members)


Posted 14 August 2014 - 11:47 AM

Completely off-topic, but this thread is reminding me of pifs (Pi as a file storage mechanism).

Game Developer, C++ Geek, Dragon Slayer - http://seanmiddleditch.com

C++ SG14 "Games & Low Latency" - Co-chair - public forums

Wargaming Seattle - Lead Server Engineer - We're hiring!

### #10 aregee (Members)


Posted 14 August 2014 - 12:36 PM

> No, the digits of pi are not random. They can be generated using a very short program, and that's the very definition of non-random.

Technically, pseudorandom generators aren't really random according to those criteria either. They, too, are generated using a very short program, and their output is fully determined once you have the seed, which in a lot of random number generators can be set to any value you want.

Another issue with random number generators is the distribution of the numbers. Some generators slightly favour the numbers near zero, for instance. That can be caused by the modulus operator that is often applied to the generator's output (modulo bias), or by the algorithm itself.

That is one of the reasons more modern random number generators take this into consideration, like the arc4random_uniform() you find on the OS X platform, for instance, which is designed to avoid this modulo bias.
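The bias is easy to see with a toy example, and the rejection-sampling fix used by arc4random_uniform can be sketched on top of any 32-bit source (the function name `uniform_below` and its structure are illustrative, not the real libc implementation):

```python
import random
from collections import Counter

# A 3-bit generator outputs 0..7. Reducing mod 3 is biased:
# residues 0 and 1 each have three preimages, residue 2 only two.
print(Counter(x % 3 for x in range(8)))  # Counter({0: 3, 1: 3, 2: 2})

def uniform_below(bound, rand32=lambda: random.getrandbits(32)):
    # Rejection sampling: discard draws from the incomplete final "slice"
    # of the 32-bit range so that every residue is equally likely.
    limit = (2**32 // bound) * bound
    while True:
        x = rand32()
        if x < limit:
            return x % bound

print(uniform_below(6))  # an unbiased value in 0..5
```

The loop almost always terminates on the first draw, since the rejected slice is a tiny fraction of the 32-bit range for small bounds.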

### #11 aregee (Members)


Posted 14 August 2014 - 12:49 PM

> Completely off-topic, but this thread is reminding me of pifs (Pi as a file storage mechanism).

Given the claim that you can actually find every finite sequence of bytes in pi, no matter how long, which I doubt has been proven (note that it can only hold for finite sequences: you can't find the whole of pi inside pi at some later offset, for instance, since that would make its digits eventually periodic, and hence pi rational, which it is not), I wonder how much metadata you would have to write down to "store your file in pi".

Edited by aregee, 14 August 2014 - 12:51 PM.

### #12 Bacterius (Members)


Posted 14 August 2014 - 01:48 PM

> Given the claim that you can actually find every finite sequence of bytes in pi, no matter how long, which I doubt has been proven (note that it can only hold for finite sequences: you can't find the whole of pi inside pi at some later offset, for instance, since that would make its digits eventually periodic, and hence pi rational, which it is not), I wonder how much metadata you would have to write down to "store your file in pi".

Yes, that is the joke: the metadata must be at least as large as the data itself, as expected from a simple pigeonhole argument (but some people still believe you can get something for nothing, which makes for some hilarious bug reports/issues).

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

### #13 aregee (Members)


Posted 26 August 2014 - 07:40 AM

> Yes, that is the joke: the metadata must be at least as large as the data itself, as expected from a simple pigeonhole argument (but some people still believe you can get something for nothing, which makes for some hilarious bug reports/issues).

I was refreshing my weak knowledge of Huffman coding, since I am making an MP3 decoder just for "fun", and stumbled upon this quote that made me think of your mention of the pigeonhole principle in this regard:

"In general, data cannot be compressed. For example, we cannot losslessly represent all m-bit strings using (m − 1)-bit strings, since there are 2^m possible m-bit strings and only 2^(m−1) possible (m−1)-bit strings."

This is out of context, though, if it sounds weird that "In general, data cannot be compressed". Here is the link to get the context:

http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-design-and-analysis-of-algorithms-spring-2012/lecture-notes/MIT6_046JS12_lec19.pdf
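The counting in that quote is easy to verify directly; even pooling every shorter string together still comes up one short:

```python
m = 8
# There are 2**m strings of exactly m bits and 2**(m-1) of exactly m-1 bits.
# Even all strings of length 0 .. m-1 combined give only 2**m - 1 candidates:
# one fewer than the number of m-bit strings, so some collision is forced.
shorter = sum(2**k for k in range(m))
print(2**m, 2**(m - 1), shorter)  # 256 128 255
```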

Edited by aregee, 26 August 2014 - 07:42 AM.

### #14 Bacterius (Members)


Posted 26 August 2014 - 08:43 AM

> Yes, that is the joke: the metadata must be at least as large as the data itself, as expected from a simple pigeonhole argument (but some people still believe you can get something for nothing, which makes for some hilarious bug reports/issues).
>
> I was refreshing my weak knowledge of Huffman coding, since I am making an MP3 decoder just for "fun", and stumbled upon this quote that made me think of your mention of the pigeonhole principle in this regard:
>
> "In general, data cannot be compressed. For example, we cannot losslessly represent all m-bit strings using (m − 1)-bit strings, since there are 2^m possible m-bit strings and only 2^(m−1) possible (m−1)-bit strings."
>
> This is out of context, though, if it sounds weird that "In general, data cannot be compressed". Here is the link to get the context:
>
> http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-design-and-analysis-of-algorithms-spring-2012/lecture-notes/MIT6_046JS12_lec19.pdf

That is correct. All that lossless compression schemes do is look at the input to discover that some patterns are more likely than others, and then process the data such that these common patterns map to a shorter representation while less common patterns map to a (necessarily) longer representation. They feed on redundancy in the input, and operate on the assumption that stuff that human beings like to use, like text files and music, are not just white noise but have structure to them. In some abstract sense, their goal is to extract information from data. If you feed any lossless compression scheme a stream of uniformly random bits, it will never, ever succeed in compressing it on average, as it should be. This mirrors the laws of thermodynamics, which is not surprising given that thermodynamics deal with disorder and structure, which is very much what information is (in fact, this is why information content is also called entropy).
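That "feeds on redundancy" behaviour is easy to observe with any stock lossless compressor, for example zlib (sizes are indicative; they vary with library version and compression level):

```python
import os
import zlib

structured = b"abcabcabc" * 1000  # 9000 highly repetitive bytes
noise = os.urandom(9000)          # 9000 uniformly random bytes

# The repetitive input collapses to a few dozen bytes; the random input
# comes out at roughly its original size (or slightly larger, from framing).
print(len(zlib.compress(structured)))
print(len(zlib.compress(noise)))
```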

Lossy compression schemes, on the other hand, simply trade correctness for efficiency, so that you can get better compression if you don't need 100% of the data intact. This is often the case when dealing with perceptual media like pictures and audio, because our brains have some rather powerful built-in error-correction features, but is typically unacceptable when it comes to structured data like source code or even simple text (we can also correct mangled text to some extent, but it's much more work and still relies on context being present, as usual).

I know that there is a very old gamedev.net thread where the thread author claims he's come up with a lossless compression scheme which always shortens its input. It's a fun read if you are interested in the subject. Here, I dug it out for you: clicky.

Edited by Bacterius, 26 August 2014 - 08:49 AM.

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

### #15 aregee (Members)


Posted 26 August 2014 - 12:22 PM

> That is correct. All that lossless compression schemes do is look at the input to discover that some patterns are more likely than others, and then process the data such that these common patterns map to a shorter representation while less common patterns map to a (necessarily) longer representation. They feed on redundancy in the input, and operate on the assumption that stuff that human beings like to use, like text files and music, are not just white noise but have structure to them. In some abstract sense, their goal is to extract information from data. If you feed any lossless compression scheme a stream of uniformly random bits, it will never, ever succeed in compressing it on average, as it should be. This mirrors the laws of thermodynamics, which is not surprising given that thermodynamics deals with disorder and structure, which is very much what information is (in fact, this is why information content is also called entropy).
>
> Lossy compression schemes, on the other hand, simply trade correctness for efficiency, so that you can get better compression if you don't need 100% of the data intact. This is often the case when dealing with perceptual media like pictures and audio, because our brains have some rather powerful built-in error-correction features, but is typically unacceptable when it comes to structured data like source code or even simple text (we can also correct mangled text to some extent, but it's much more work and still relies on context being present, as usual).
>
> I know that there is a very old gamedev.net thread where the thread author claims he's come up with a lossless compression scheme which always shortens its input. It's a fun read if you are interested in the subject. Here, I dug it out for you: clicky.

Thank you for the link! It was quite entertaining.

My original answer got lost, so I was a bit discouraged from repeating it.

Mental note to self...  Do not use Google Chrome Canary to write in discussion forums.

Edited by aregee, 26 August 2014 - 12:23 PM.
