# Spreading a value x randomly across n elements


## Recommended Posts

I'm currently working on a simple algorithm that has to spread an undefined number (an int) across an undefined number of elements randomly. For example, I want to spread the value 50 randomly across 8 elements.

Currently I'm generating random floats, normalizing them, and then converting them to their int counterparts. It works, but it feels like a rather convoluted mess to me.

If anyone can suggest a more elegant way to solve this, I'd be happy to hear it.

Edited by Louwin

##### Share on other sites

I don't have an elegant way, but when I read your title, my method for doing this is almost exactly how you are approaching it. There are two steps: first define your distribution, then distribute.

You are defining your distribution with random floats, normalizing them, and finally distributing the integer. This sounds like a solid approach to me rather than a convoluted mess.

When you actually distribute the int, are you doing something like:

`elementValue = (int)(weight * initialValue);`?

Are the random numbers in the range 0-1?

Edited by Nanoha

##### Share on other sites

> I don't have an elegant way, but when I read your title, my method for doing this is almost exactly how you are approaching it. There are two steps: first define your distribution, then distribute.
>
> You are defining your distribution with random floats, normalizing them, and finally distributing the integer. This sounds like a solid approach to me rather than a convoluted mess.
>
> When you actually distribute the int, are you doing something like:
>
> `elementValue = (int)(weight * initialValue);`?
>
> Are the random numbers in the range 0-1?

That's essentially what I'm doing. The problem, however, is that you are slicing off the decimals, so in the end the numbers don't add up to the initial input. With the example of spreading 50 across 8 elements randomly, the total sum of the output is ~45 rather than 50. I have some extra code to deal with this issue, and that is where it starts to feel messy and convoluted.

I'm not sure that is the best way to deal with it, or if there is a more elegant solution.

EDIT: Yes, the random numbers are initially between 0 and 1; after normalizing, they all add up to 1.

Edited by Louwin

##### Share on other sites

This is what sprang to mind for me; it shouldn't be that messy or convoluted if you remove the same value you generated as you dispense.

    int[] elements = new int[10];
    int amount = 50;
    for (int i = 0; amount > 0; ++i)
    {
        // inclusive upper bound, so the last remaining unit can actually be dispensed
        int dispenseAmt = Random.Range(0, amount + 1);
        amount -= dispenseAmt;
        elements[i] += dispenseAmt;

        // wrap around before indexing past the end of the array
        if (i == elements.Length - 1)
        {
            i = -1;
        }
    }


##### Share on other sites

> That's essentially what I'm doing. The problem, however, is that you are slicing off the decimals, so in the end the numbers don't add up to the initial input. With the example of spreading 50 across 8 elements randomly, the total sum of the output is ~45 rather than 50. I have some extra code to deal with this issue, and that is where it starts to feel messy and convoluted.
>
> I'm not sure that is the best way to deal with it, or if there is a more elegant solution.
>
> EDIT: Yes, the random numbers are initially between 0 and 1; after normalizing, they all add up to 1.

I see what you mean: by truncating the decimals you will have some left over. Ferrous' method seems to avoid this, but it does seem susceptible to producing one huge value, which might not be desirable (your original method seems less vulnerable to that).

Another method you could try:

    int value = 50;
    const int numElements = 8;
    int elements[numElements] = {}; // zero-initialized
    for (int i = 0; i < value; i++)
    {
        int random = RandomInt(0, numElements - 1); // inclusive range assumed
        elements[random] += 1;
    }

Essentially pulling one off at a time and throwing it into a random box. It might not be the best approach if the value is large, but it is simple, doesn't leave anything over, and should be less susceptible to one element being very large compared to the rest. I don't know a great deal about distributions and probability, but with Ferrous' method I think that if you roll a very large first number, that forces the subsequent elements to have low values (it's still random, just a different distribution).

Edited by Nanoha

##### Share on other sites

> This is what sprang to mind for me; it shouldn't be that messy or convoluted if you remove the same value you generated as you dispense.
>
>     int[] elements = new int[10];
>     int amount = 50;
>     for (int i = 0; amount > 0; ++i)
>     {
>         int dispenseAmt = Random.Range(0, amount + 1);
>         amount -= dispenseAmt;
>         elements[i] += dispenseAmt;
>         if (i == elements.Length - 1)
>         {
>             i = -1;
>         }
>     }


I considered this option already, but an algorithm like this would favor the first elements in the array significantly. An alternative I considered was using a max value per roll, but that could produce a disproportionately big number for the last element if the others all roll low and the remainder goes to the last element, or leave all-zero values at the end of the array if everything rolls very high.

> Another method you could try:
>
>     int value = 50;
>     const int numElements = 8;
>     int elements[numElements] = {}; // zero-initialized
>     for (int i = 0; i < value; i++)
>     {
>         int random = RandomInt(0, numElements - 1);
>         elements[random] += 1;
>     }

I also considered this; it would definitely give the most desirable result, but it would perform poorly for a very large value. If, for example, you want to spread 1,000,000 across 5 elements, you'd have to go through the loop 1,000,000 times rather than 5 times, which is a big deal, as you pointed out yourself.

##### Share on other sites

What about doing your first option (forming a distribution and then distributing), then working out how much is left and scattering it randomly?

    int value = 1000000;
    const int numElements = 8;
    float weights[numElements];
    float weightTotal = 0;
    int elements[numElements];

    for (int i = 0; i < numElements; i++)
    {
        weights[i] = RandomFloat(0, 1);
        weightTotal += weights[i];
    }

    for (int i = 0; i < numElements; i++)
        weights[i] /= weightTotal;

    int elementTotal = 0;
    for (int i = 0; i < numElements; i++)
    {
        elements[i] = (int)roundf(weights[i] * value);
        elementTotal += elements[i];
    }

    // note: rounding up can also overshoot, making this negative,
    // in which case the excess would have to be removed instead
    int difference = value - elementTotal;

    for (int i = 0; i < difference; i++)
        elements[RandomInt(0, numElements - 1)] += 1;


That does seem to go back to the 'convoluted' that you were aiming to avoid though.

Edited by Nanoha

##### Share on other sites
    public int[] randomize(int value, int arraySize)
    {
        float[] distributions = new float[arraySize];
        int[] output = new int[arraySize];
        float length = 0.0f;
        for (int i = 0; i < arraySize; i++)
        {
            distributions[i] = (float)random.NextDouble();
            length += distributions[i];
        }

        for (int i = 0; i < arraySize; i++)
            distributions[i] /= length;

        float sum = 0.5f;
        for (int i = 0; i < arraySize; i++)
        {
            distributions[i] *= value;
            sum += distributions[i] % 1.0f;
            output[i] = (int)Math.Floor(distributions[i]);
            if (sum >= 1.0f)
            {
                sum--;
                output[i]++;
            }
        }

        return output;
    }


That is what my code looks like currently. Thanks to you I realized an embarrassing mistake (I had two separate loops, one to generate the random floats and one to add them to the length).

I also had the problem that sometimes the distributions added up to 49.9999 instead of 50 (I assume this is simply due to rounding errors), so I had to check whether it was the final element, and if so, whether the sum was bigger than 0. That check could simply be removed by starting the sum at 0.5 rather than 0. This also solves the problem that the first element in the array could never round up, while the final one always would. The code is about a third of its initial length now and looks much better. Sometimes you just have to take a break and then look at your code again to notice the obvious mistakes you made...

Edited by Louwin

##### Share on other sites

Unless this is something you are doing every frame, I wouldn't worry about performance impacts, unless you've actually profiled and this has come up as a bottleneck.

##### Share on other sites
Here’s my approach: http://ideone.com/PBEL1x

While determining the portion, the fractional component is carried over to the next bin.

    #include <iostream>
    #include <cstdlib>
    #include <ctime>
    #include <algorithm>
    #include <cmath>
    #include <array>
    #include <iterator>
    #include <numeric> // for accumulate

    int main(int, char*[]){
        using namespace std;
        srand(time(0));

        // setup bins
        int total = 5000;
        const int MAX_BINS = 25;
        array<int,MAX_BINS> bins;

        // determine weighted distribution
        array<float,MAX_BINS> weights;
        generate(weights.begin(), weights.end(), [](){
            return rand()*1.f/RAND_MAX;
        });
        float weight_sum = accumulate(weights.begin(), weights.end(), 0.f);

        // determine portions
        // use "extra" as an accumulator to determine when to add an extra 1 to the bin
        float extra = 0;
        int used = 0;
        for ( int idx = 0; idx < MAX_BINS; ++idx ) {
            float actual = weights[idx]/weight_sum * total + extra;
            int count = (int)actual;

            extra = fmodf(actual, 1.f);
            bins[idx] = count;
            used += count;
        }

        // any accumulation error is added to the first bin
        int error = total - used;
        bins[0] += error;

        // output results
        cout << "total:\t" << total
             << "\nbins:\t";

        copy(bins.begin(), bins.end(), ostream_iterator<int>(cout, "\t"));
        cout << "\nerror:" << error
             << "\nextra:" << total - accumulate(bins.begin(),bins.end(),0) << '\n';
    }

Output:

    total:	5000
    bins:	119	24	253	186	176	121	283	85	332	51	350	34	304	7	50	371	338	52	333	363	280	372	92	140	284
    error:	0
    extra:	0

And it is flexible when there isn't much to distribute:

    total:	3
    bins:	0	0	0	1	0	1	0	0	0	1
    error:	0
    extra:	0

EDIT:
Actually, I just realized that this isn't the best approach for small numbers over numerous bins, as the accumulator will favor bins toward the end.

Edited by fastcall22
