
Pi decimals as a random number generator


14 replies to this topic

#1 rAm_y_   Members   -  Reputation: 481


Posted 13 August 2014 - 08:16 PM

Not sure if there's anything in this, but I know random number generators are a sought-after thing in programming.

 

So Pi can be described by the Taylor series 4* (1 - 1/3 + 1/5 - 1/7 + 1/9...)

 

However, nobody has ever been able to find any order in the decimal expansion it produces; some think it's truly a random number. Is there any way to use this as a random number generator?
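For a sense of scale, here is a minimal sketch of those partial sums (Python, plain floats): the error after n terms is on the order of 1/n, so every extra correct digit costs roughly ten times more terms.

```python
# Partial sums of the Leibniz series pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...).
# Convergence is painfully slow: ~10x more terms per additional digit.
def leibniz_pi(n_terms):
    total = 0.0
    for k in range(n_terms):
        total += (-1.0) ** k / (2 * k + 1)
    return 4.0 * total

for n in (10, 1_000, 100_000):
    print(n, leibniz_pi(n))  # 3.04..., 3.1405..., 3.14158...
```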




#2 ApochPiQ   Moderators   -  Reputation: 16396


Posted 13 August 2014 - 08:22 PM

You could, but it's not particularly efficient. If I want to generate a thousand random numbers, I have to expand the Taylor series a painfully long way, which gets into numerical precision problems and other nastiness. There exist much more effective ways to get pseudo-random numbers.

#3 AgPF6   Members   -  Reputation: 165


Posted 13 August 2014 - 08:46 PM

Hello rAm_y,

It also depends on the accuracy your simulation needs. If you just need a number to index into something, for example a bonus-item table, then speed is more important. Look into xorshift.
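A minimal xorshift32 sketch, using the (13, 17, 5) shift triple from Marsaglia's xorshift paper; the seed and the `% 10` reduction below are only illustrative (and that `%` carries the modulo bias discussed further down the thread):

```python
# xorshift32: three xor-shift steps on a nonzero 32-bit state.
class XorShift32:
    def __init__(self, seed):
        assert seed != 0, "state must be nonzero or the stream sticks at 0"
        self.state = seed & 0xFFFFFFFF

    def next(self):
        x = self.state
        x ^= (x << 13) & 0xFFFFFFFF  # mask keeps Python ints at 32 bits
        x ^= x >> 17
        x ^= (x << 5) & 0xFFFFFFFF
        self.state = x
        return x

rng = XorShift32(12345)
print([rng.next() % 10 for _ in range(5)])  # e.g. five item indices 0..9
```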

#4 Álvaro   Crossbones+   -  Reputation: 13907


Posted 13 August 2014 - 09:03 PM

No, the digits of pi are not random. They can be generated using a very short program, and that's the very definition of non-random.
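To make "a very short program" concrete, here is a sketch of Gibbons' unbounded spigot algorithm, which streams the decimal digits of pi exactly using only integer arithmetic (it also gets slower per digit as the integers grow, which is Bacterius's point below):

```python
# Gibbons' streaming spigot: yields 3, 1, 4, 1, 5, 9, ... indefinitely.
def pi_digits():
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:          # next digit is certain: emit it
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), 10 * (3 * q + r) // t - 10 * n
        else:                              # otherwise consume another term
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print([next(gen) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```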

#5 Bacterius   Crossbones+   -  Reputation: 9280


Posted 13 August 2014 - 11:25 PM

> You could, but it's not particularly efficient. If I want to generate a thousand random numbers, I have to expand the Taylor series a painfully long way, which gets into numerical precision problems and other nastiness. There exist much more effective ways to get pseudo-random numbers.

 

The digits of pi can be generated more efficiently (and accurately) than by expanding one of its Taylor series. But, yes, it is not a particularly good idea, because it is slow (it gets slower and slower as you go) and the digits of pi are not known to be uniformly distributed anyway. Do not use it as a pseudorandom number generator: it has zero advantages and many disadvantages compared to a multiply-with-carry generator, a Mersenne Twister, or a counter-mode pseudorandom permutation.
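For comparison, a sketch of a small multiply-with-carry generator along the lines of Marsaglia's classic paired 16-bit version (the multipliers 36969 and 18000 are his; the seeds here are arbitrary):

```python
# Two 16-bit multiply-with-carry lanes combined into one 32-bit output.
class MWC:
    def __init__(self, z=362436069, w=521288629):
        self.z, self.w = z, w

    def next(self):
        self.z = 36969 * (self.z & 0xFFFF) + (self.z >> 16)  # low half * a + carry
        self.w = 18000 * (self.w & 0xFFFF) + (self.w >> 16)
        return ((self.z << 16) + self.w) & 0xFFFFFFFF

rng = MWC()
print([rng.next() for _ in range(3)])
```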


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#6 plainoldcj   Members   -  Reputation: 847


Posted 14 August 2014 - 07:52 AM

EDIT: bullshit comment, ignore


Edited by plainoldcj, 14 August 2014 - 08:04 AM.


#7 Buckeye   Crossbones+   -  Reputation: 6275


Posted 14 August 2014 - 09:19 AM

> No, the digits of pi are not random. They can be generated using a very short program, and that's the very definition of non-random.

 

If by "not random" you mean the same sequence would be generated each time, I would agree. If you really mean that the digits of pi are not random, mathematicians at Purdue, Berkeley, and elsewhere appear to have concluded, at best, that pi's randomness is an open question. I can find no reference indicating that, in the sequence of digits of pi, the digit following any finite sequence can be predicted, which is one indication of randomness.

 

With regard to the OP:


> is there any way to use this as a random number generator?

 

If you need to generate a number that cannot be predicted before it's generated (perhaps the intent of Alvaro's statement), that's problematic.

 

For instance, assuming that the digits of pi are random (as mentioned above, there are questions in that regard), you would have to develop an algorithm to randomly select a sequence from those digits. Therein lies the rub: such a random selection algorithm would either be predictable or would need yet another supporting algorithm of the same kind.


Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.


#8 Álvaro   Crossbones+   -  Reputation: 13907


Posted 14 August 2014 - 11:47 AM

The only precise notion of randomness I am familiar with is Kolmogorov randomness. Under that definition, a string is considered random if no program substantially shorter than the string itself can generate it. A sufficiently long string of digits from pi will certainly be longer than a program that generates it, and therefore it is not random.
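In symbols, a sketch of that definition (with U a fixed universal machine and |p| the length of program p):

```latex
K(x) = \min\{\, |p| : U(p) = x \,\}, \qquad
x \text{ is random (up to a constant } c\text{) iff } K(x) \ge |x| - c.
```

A fixed digit-spigot program plus an encoding of n prints the first n digits of pi, so K(pi[1..n]) = O(log n), far below n.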

Of course, by this definition, the output of a pseudo-random number generator is never random. The article from Purdue seems to be concerned with whether the digits of Pi pass something like the Diehard tests. The article from Berkeley talks about the quest to prove pi is normal in some base. Those might be interesting research questions but, even if the answers were affirmative, it would make very little sense to use digits of pi as pseudo-random numbers. There are much cheaper ways to generate high-quality pseudo-random numbers.

#9 SeanMiddleditch   Members   -  Reputation: 7152


Posted 14 August 2014 - 11:47 AM

Completely off-topic, but this thread is reminding me of pifs (Pi as a file storage mechanism).

#10 aregee   Members   -  Reputation: 1026


Posted 14 August 2014 - 12:36 PM

> No, the digits of pi are not random. They can be generated using a very short program, and that's the very definition of non-random.

 

Technically, pseudo-random generators aren't really random by those criteria either. Their output, too, is generated by a very short program, and their progression is fully determined once you have the seed, which in many random number generators can be set to any value you want.
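For instance, in Python (whose random module is itself a Mersenne Twister), the same seed reproduces the same stream:

```python
import random

# Two independently constructed generators, same seed, same output.
a, b = random.Random(42), random.Random(42)
print([a.randrange(100) for _ in range(5)])
print([b.randrange(100) for _ in range(5)])  # identical to the line above
```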

 

Another issue with random number generators is the distribution of the numbers. Some generators slightly favour the numbers near zero, for instance. That can come from the modulus operator often used together with random number generators (modulo bias), or from the algorithm itself.

 

That is one of the reasons more modern random number generators take this into consideration, like the arc4random_uniform() you find on the OS X platform, which is designed to avoid this modulo bias.
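Here is a sketch of the rejection trick such functions use; this is the idea, not Apple's exact code, and getrandbits merely stands in for the raw generator:

```python
import random

# Reject raw 32-bit draws from the uneven tail so every residue 0..n-1
# is equally likely after the final % n reduction.
def uniform_below(n):
    assert 0 < n <= 2**32
    limit = (2**32 // n) * n       # largest multiple of n that fits in range
    while True:
        x = random.getrandbits(32)
        if x < limit:              # accepted draws are uniform over [0, limit)
            return x % n

print([uniform_below(6) for _ in range(10)])  # unbiased 0..5, like die faces
```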



#11 aregee   Members   -  Reputation: 1026


Posted 14 August 2014 - 12:49 PM

> Completely off-topic, but this thread is reminding me of pifs (Pi as a file storage mechanism).

 

Assuming you could actually find every sequence of bytes in pi, no matter how long (which I doubt: you can't find pi inside pi, for instance, since if you could, its digits would eventually repeat and pi would be rational, which a transcendental number cannot be), I wonder how much metadata you would have to write down to "store your file in pi". :D


Edited by aregee, 14 August 2014 - 12:51 PM.


#12 Bacterius   Crossbones+   -  Reputation: 9280


Posted 14 August 2014 - 01:48 PM


> Assuming you could actually find every sequence of bytes in pi, no matter how long (which I doubt: you can't find pi inside pi, for instance, since if you could, its digits would eventually repeat and pi would be rational, which a transcendental number cannot be), I wonder how much metadata you would have to write down to "store your file in pi".

 

Yes, that is the joke: the metadata must be at least as large as the data itself, as expected from a simple pigeonhole argument (but some people still believe you can get something for nothing, which makes for some hilarious bug reports/issues). :)
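A toy version of the joke (the digit window and helper name are made up for illustration): the offset you must record generally takes about as many digits as the data you "stored", which is the pigeonhole argument in action.

```python
# "Store" a digit string as (offset, length) into a window of pi's digits.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"  # after "3."

def store_in_pi(data_digits):
    offset = PI_DIGITS.find(data_digits)  # -1 if absent from this window
    return offset, len(data_digits)

print(store_in_pi("5926"))  # (3, 4): read 4 digits starting at offset 3
```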




#13 aregee   Members   -  Reputation: 1026


Posted 26 August 2014 - 07:40 AM

> Yes, that is the joke: the metadata must be at least as large as the data itself, as expected from a simple pigeonhole argument (but some people still believe you can get something for nothing, which makes for some hilarious bug reports/issues). :)

 

 

I was refreshing my weak knowledge of Huffman coding, since I am making an MP3 decoder just for "fun", and stumbled upon this quote that made me think of your mention of the pigeonhole principle in this regard:

 

"In general, data cannot be compressed. For example, we cannot losslessly represent all m-bit strings using (m − 1)-bit strings, since there are 2(pow)m possible m-bit strings and only 2(pow)m−1 possible (m−1)-bit strings."

 

This is out of context, though, if it sounds weird that "in general, data cannot be compressed". Here is the link for the context:

 

http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-design-and-analysis-of-algorithms-spring-2012/lecture-notes/MIT6_046JS12_lec19.pdf
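The quote's counting argument, checked concretely for m = 3:

```python
# 8 three-bit strings vs. 4 two-bit strings: no injective (lossless)
# map from all m-bit inputs to shorter outputs can exist.
m = 3
print(2**m, "inputs of length m, but only", 2**(m - 1), "outputs of length m-1")
```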


Edited by aregee, 26 August 2014 - 07:42 AM.


#14 Bacterius   Crossbones+   -  Reputation: 9280


Posted 26 August 2014 - 08:43 AM

> > Yes, that is the joke: the metadata must be at least as large as the data itself, as expected from a simple pigeonhole argument (but some people still believe you can get something for nothing, which makes for some hilarious bug reports/issues). :)
>
> I was refreshing my weak knowledge of Huffman coding, since I am making an MP3 decoder just for "fun", and stumbled upon this quote that made me think of your mention of the pigeonhole principle in this regard:
>
> "In general, data cannot be compressed. For example, we cannot losslessly represent all m-bit strings using (m − 1)-bit strings, since there are 2^m possible m-bit strings and only 2^(m−1) possible (m − 1)-bit strings."
>
> This is out of context, though, if it sounds weird that "in general, data cannot be compressed". Here is the link for the context:
>
> http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-design-and-analysis-of-algorithms-spring-2012/lecture-notes/MIT6_046JS12_lec19.pdf


That is correct. All that lossless compression schemes do is look at the input to discover that some patterns are more likely than others, and then process the data such that these common patterns map to a shorter representation while less common patterns map to a (necessarily) longer representation. They feed on redundancy in the input, and operate on the assumption that the stuff human beings like to use, like text files and music, is not just white noise but has structure to it. In some abstract sense, their goal is to extract information from data. If you feed any lossless compression scheme a stream of uniformly random bits, it will never, ever succeed in compressing it on average, as it should be. This mirrors the laws of thermodynamics, which is not surprising given that thermodynamics deals with disorder and structure, which is very much what information is (in fact, this is why information content is also called entropy).
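A quick empirical check of this with zlib (any lossless codec shows the same shape):

```python
import os, zlib

# Redundant input shrinks a lot; uniformly random bytes do not shrink at
# all (the output usually grows by a few bytes of header/overhead).
structured = b"abcabcabc" * 1000
noise = os.urandom(9000)
for name, data in (("structured", structured), ("random", noise)):
    print(name, len(data), "->", len(zlib.compress(data)))
```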

Lossy compression schemes, on the other hand, simply trade correctness for efficiency, so that you can get better compression if you don't need 100% of the data intact. This is often the case when dealing with perceptual media like pictures and audio, because our brains have some rather powerful built-in error-correction features, but is typically unacceptable when it comes to structured data like source code or even simple text (we can also correct mangled text to some extent, but it's much more work and still relies on context being present, as usual).

I know that there is a very old gamedev.net thread where the thread author claims he's come up with a lossless compression scheme which always shortens its input. It's a fun read if you are interested in the subject. Here, I dug it out for you: clicky.

Edited by Bacterius, 26 August 2014 - 08:49 AM.



#15 aregee   Members   -  Reputation: 1026


Posted 26 August 2014 - 12:22 PM

> That is correct. All that lossless compression schemes do is look at the input to discover that some patterns are more likely than others, and then process the data such that these common patterns map to a shorter representation while less common patterns map to a (necessarily) longer representation. They feed on redundancy in the input, and operate on the assumption that the stuff human beings like to use, like text files and music, is not just white noise but has structure to it. In some abstract sense, their goal is to extract information from data. If you feed any lossless compression scheme a stream of uniformly random bits, it will never, ever succeed in compressing it on average, as it should be. This mirrors the laws of thermodynamics, which is not surprising given that thermodynamics deals with disorder and structure, which is very much what information is (in fact, this is why information content is also called entropy).
>
> Lossy compression schemes, on the other hand, simply trade correctness for efficiency, so that you can get better compression if you don't need 100% of the data intact. This is often the case when dealing with perceptual media like pictures and audio, because our brains have some rather powerful built-in error-correction features, but is typically unacceptable when it comes to structured data like source code or even simple text (we can also correct mangled text to some extent, but it's much more work and still relies on context being present, as usual).
>
> I know that there is a very old gamedev.net thread where the thread author claims he's come up with a lossless compression scheme which always shortens its input. It's a fun read if you are interested in the subject. Here, I dug it out for you: clicky.

 

 

Thank you for the link! It was quite entertaining. :)

 

My original answer got lost, so I was a bit too discouraged to type it again.

 

Mental note to self...  Do not use Google Chrome Canary to write in discussion forums.

 

[image: Chrome's "Aw, Snap!" crash page]


Edited by aregee, 26 August 2014 - 12:23 PM.




