
# Sums of uniform randoms


### #1 Kaptein - Prime Members - Reputation: 2207


Posted 01 November 2012 - 09:04 PM

This is my last resort after two days of devouring information from everywhere.
My math skills are relatively poor, and I have a feeling that what I'm trying to do simply can't be done.

I am summing random variables that each have a uniform distribution; once summed, the result tends toward a normal distribution.
With only n = 3, the distribution is already extremely far from uniform.
How can I make the distribution uniform again, and is that even possible?

This is the ultimate example:

```cpp
for (int i = 0; i < N; i++) { X += uniform(); }
X = uniformize(X, N); // the transform I'm asking about
```

http://www.johndcook.com/blog/2009/02/12/sums-of-uniform-random-values/
The density function of my final distribution should be a constant 1/N (i.e., uniform on [0, N]).
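For concreteness, here is a small sketch (mine, not from the thread) that measures how strongly the sum of just three uniforms piles up around its mean; the function name and thresholds are illustrative:

```cpp
#include <random>

// Fraction of samples of X = u1 + u2 + u3 (each uniform in [0,1))
// that land in the middle third [1, 2] of the full range [0, 3].
double middleThirdFraction(int samples) {
    std::mt19937 rng(12345); // fixed seed for reproducibility
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    int hits = 0;
    for (int i = 0; i < samples; ++i) {
        double x = uniform(rng) + uniform(rng) + uniform(rng);
        if (x >= 1.0 && x <= 2.0)
            ++hits;
    }
    return (double)hits / samples;
}
```

A uniform distribution would put 1/3 of its mass in the middle third; the sum of three uniforms (the Irwin-Hall distribution) puts 2/3 of it there, which is the non-uniformity described above.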

Edited by Kaptein, 01 November 2012 - 11:56 PM.

### #2 Inferiarum - Members - Reputation: 733


Posted 02 November 2012 - 02:57 AM

Why not just multiply the variable drawn from the uniform distribution by N?

### #3 Kaptein - Prime Members - Reputation: 2207


Posted 02 November 2012 - 05:12 AM

That's just scaling; I need octaves to build a frequency.
A real-life example would be uniformly distributed white noise.

Edited by Kaptein, 02 November 2012 - 05:13 AM.

### #4 Inferiarum - Members - Reputation: 733


Posted 02 November 2012 - 07:15 AM

I am not sure what you want to accomplish.

If you want a random variable uniformly distributed from 0 to N, and you have one uniformly distributed from 0 to 1, then you just scale it by N.
I don't see the point of creating a random variable with some other distribution and then transforming it back to uniform if you already have a uniform distribution to begin with.

### #5 Bacterius - Crossbones+ - Reputation: 9881


Posted 02 November 2012 - 09:16 AM

I have a feeling you might be confusing the concepts of random variable and probability distribution. Could you tell us in detail exactly what you are trying to do?

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

- Pessimal Algorithms and Simplexity Analysis

### #6 clb - Members - Reputation: 1825


Posted 02 November 2012 - 09:36 AM

If you only have an API that allows you to generate uniform bits (0 or 1), you can generate a random number in the range [0, N] in the following way:
```cpp
int uniform(); // Returns 0 or 1 uniformly at random.

uint32_t RandomNumber(uint32_t N)
{
    // Largest power of two smaller than or equal to N.
    uint32_t NPow2 = 1;
    while (NPow2 * 2 <= N)
        NPow2 *= 2;

    // Draw one random bit for each bit position up to NPow2.
    uint32_t result = 0;
    for (uint32_t i = 1; i <= NPow2; i <<= 1)
        if (uniform())
            result |= i;

    // result is in [0, 2*NPow2 - 1]; rescale it to [0, N].
    return (uint32_t)((uint64_t)result * (N + 1) / (NPow2 << 1));
}
```


However, this is most likely not the route you want to take, unless you have a very special setup where your RNG can only supply individual random bits. That sounds rare; most RNG APIs are built to produce random integers in a range [a, b] and random floats in the range [0, 1] (which is roughly 23 bits of randomness).
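One caveat (my addition, not from the post above): the final rescale maps 2*NPow2 equally likely bit patterns onto N+1 outputs, so unless N+1 is a power of two, some outputs come up slightly more often than others. If exact uniformity matters, a rejection loop over the same bit source avoids the bias; the names here are mine:

```cpp
#include <cstdint>
#include <random>

// Stand-in bit source playing the role of uniform(): returns 0 or 1.
static std::mt19937 g_rng(42);
int uniformBit() { return (int)(g_rng() & 1u); }

// Exactly uniform in [0, N]: draw enough bits to cover [0, N] and
// redraw whenever the assembled value overshoots N.
uint32_t RandomNumberExact(uint32_t N)
{
    // Smallest power of two strictly greater than N.
    uint32_t top = 1;
    while (top <= N)
        top <<= 1;

    for (;;) {
        uint32_t result = 0;
        for (uint32_t i = 1; i < top; i <<= 1)
            if (uniformBit())
                result |= i;
        if (result <= N)
            return result; // accept; otherwise retry
    }
}
```

The expected number of retries is bounded (acceptance probability is always above 1/2), which is the usual trade for removing the modulo/rescale bias.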

### #7 Bacterius - Crossbones+ - Reputation: 9881


Posted 02 November 2012 - 08:39 PM

> However, this is most likely not the route you want to take, unless you have a very special setup where your RNG can only supply individual random bits. That sounds rare; most RNG APIs are built to produce random integers in a range [a, b] and random floats in the range [0, 1] (which is roughly 23 bits of randomness).

These special setups are not rare at all, though I agree you are more likely to find them in cryptographic settings (entropy pools) than in conventional PRNGs. Also, a small terminology nitpick: a random bit generator is called an RBG (and its pseudorandom counterpart is a PRBG).

What I would find rare is PRNGs directly producing floating-point numbers. As far as I can tell this is usually provided by a helper function which performs a floating-point division along the lines of random(max) / max. I recall a few purely floating-point generators, but in my opinion that's not a very smart way to go about it; integer arithmetic is well-defined, unlike floating-point arithmetic.
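The helper function described here could be sketched as follows (an assumed example, not any particular library's API; keeping only the top 24 bits means every result is exactly representable in a float's significand):

```cpp
#include <cstdint>
#include <random>

// Map a 32-bit integer RNG output to a float in [0, 1).
// This is the random(max) / max idea: drop to 24 bits so the
// quotient is exact in single precision, then scale by 1/2^24.
float uniformFloat(std::mt19937& rng) {
    uint32_t r = rng();
    return (r >> 8) * (1.0f / 16777216.0f); // 2^24 = 16777216
}
```

All the randomness still comes from the integer generator; the float conversion is purely a post-processing step.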


### #8 Kaptein - Prime Members - Reputation: 2207


Posted 02 November 2012 - 09:06 PM

I asked a statistician and got a final answer: I can't do this, because of the central limit theorem.
When I sum independent random samples, I'm destined to get a bell-shaped, non-uniform probability distribution.
Nevertheless, interesting conversations above.

### #9 Álvaro - Crossbones+ - Reputation: 14630


Posted 03 November 2012 - 10:16 AM

Well, not if your uniform random variables are not independent. Then the central limit theorem does not apply, and you can build a uniform U(0,7) distribution as the sum of seven U(0,1) variables that happen to be identical.

But I still don't quite understand what you were trying to do, so this conversation feels kind of irrelevant.
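Álvaro's point can be demonstrated directly: if the seven summands are literally the same draw, the sum is 7*u, which is still uniform, just stretched over [0, 7). A minimal sketch, with the dependence made explicit:

```cpp
#include <random>

// Sum seven *identical* (fully dependent) U(0,1) draws. The result is
// 7*u, still uniformly distributed; the CLT requires independence.
double dependentSum(std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(0.0, 1.0);
    double u = uniform(rng); // a single draw
    double sum = 0.0;
    for (int i = 0; i < 7; ++i)
        sum += u;            // the same value, added seven times
    return sum;
}
```

With independent draws the same loop would produce the bell-shaped Irwin-Hall distribution instead.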

### #10 Kaptein - Prime Members - Reputation: 2207


Posted 03 November 2012 - 11:40 PM

I was using the randoms to create octaved frequencies for biomes.
I didn't want gradient noise, since it has an extremely uneven distribution (think of the derivatives at the extremities of a sphere).
I basically wanted to make saw-like linear frequencies.

but...
http://www.johndcook...-random-values/

Edited by Kaptein, 03 November 2012 - 11:42 PM.

### #11 Bacterius - Crossbones+ - Reputation: 9881


Posted 03 November 2012 - 11:55 PM

You can transform a uniform distribution into something usable for, say, Minecraft-style biome generation, but it's much more involved than just summing up distributions. Look into fractional Brownian motion, Perlin noise, etc.

The graphs on the page you linked are probability density functions: since we're looking at a continuous range of values, the area under the curve between some A and B gives the probability of generating an output in that interval. They don't mean anything else, and a normal distribution will still have a large variance.

Fractional Brownian motion seems to be the closest thing to what you are looking for. Basically, it takes uniformly generated random numbers and outputs "smooth" noise which, when mapped to a heightmap, gives somewhat realistic terrain instead of a completely random mess. It involves summing octaves of the same uniform sample with different frequencies and amplitudes. Is this what you were looking for?
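A minimal 1D sketch of that octave-summing idea, assuming hash-based value noise as the base signal (all names and constants here are illustrative, not from any particular noise library):

```cpp
#include <cmath>
#include <cstdint>

// Hash-based 1D value noise: a deterministic pseudo-random value per integer cell.
static float valueAt(int32_t x) {
    uint32_t h = (uint32_t)x * 2654435761u;
    h ^= h >> 16; h *= 2246822519u; h ^= h >> 13;
    return (h & 0xFFFFFF) / (float)0x1000000; // in [0, 1)
}

// Smoothly interpolated value noise at a real coordinate.
static float valueNoise(float x) {
    int32_t i = (int32_t)std::floor(x);
    float f = x - (float)i;
    float t = f * f * (3.0f - 2.0f * f);     // smoothstep fade
    return valueAt(i) + t * (valueAt(i + 1) - valueAt(i));
}

// fBm: sum octaves with doubling frequency and halving amplitude.
float fbm(float x, int octaves) {
    float sum = 0.0f, amp = 0.5f, freq = 1.0f, norm = 0.0f;
    for (int o = 0; o < octaves; ++o) {
        sum  += amp * valueNoise(x * freq);
        norm += amp;
        amp  *= 0.5f;
        freq *= 2.0f;
    }
    return sum / norm; // normalize back into [0, 1)
}
```

Each octave adds finer detail at lower weight, which is what gives fBm terrain its characteristic mix of large hills and small bumps.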


### #12 Kaptein - Prime Members - Reputation: 2207


Posted 04 November 2012 - 01:13 AM

Yes, I scrapped my plans for 1D (linear sums) and, later, weighted Voronoi diagrams.
Now I'm indeed using fBm and simplex noise:
http://jpdict.ath.cx...les/biome2d.png
and all is well.

The problem isn't really determining the biomes, it's determining the terrain... in real life the terrain determines the biome to some degree.
But I'm about to cross that bridge now, so I'm not sure yet.

Edited by Kaptein, 04 November 2012 - 01:15 AM.
