Self-balancing random number generation
#1 Moderators  Reputation: 3346
Posted 09 August 2001  06:50 PM
#2 Members  Reputation: 232
Posted 09 August 2001  08:50 PM
First, let me describe some terms:
SUM: This term is equal to the sum of all the previous return values from the function. "True" is counted as +1, "False" is counted as -1. Ex: If the fn had returned 10 trues and 8 falses, then SUM would be +2.
C: C is a constant which will need to be tweaked in order to get it to fit perfectly. C is the cap for the difference in the number of true and false responses. For example, if C = 5, then there can be no greater than 5 more true responses than false responses (and vice versa) before the fn forces a response of the opposite type.
The function below will return a weighted probability that can be used as usual for determining a true or false. It needs to be given a probability (called x below) between 0 and 1 (inclusive). Here is the actual (simple) fn in pseudocode:
if (SUM >= 0)
    p = x - (SUM / C)*x
else
    p = x - (SUM / C)*(1-x)
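For concreteness, here is a minimal C sketch of the pseudocode above; the function name `balanced_flip`, the static `sum`, and the use of `rand()` as the underlying PRNG are my assumptions, not part of the original post:

```c
#include <stdlib.h>

/* Running SUM of past results: +1 per true, -1 per false.
   (Names here are assumptions; the post only gives pseudocode.) */
static int sum = 0;
static const double C = 5.0; /* cap on the true/false imbalance; tweak */

/* Returns 1 (true) or 0 (false) with base probability x in [0, 1],
   biased toward whichever outcome is currently under-represented. */
int balanced_flip(double x)
{
    double p;
    if (sum >= 0)
        p = x - (sum / C) * x;         /* too many trues: lower p */
    else
        p = x - (sum / C) * (1.0 - x); /* too many falses: raise p */
    int result = ((double)rand() / ((double)RAND_MAX + 1.0)) < p;
    sum += result ? 1 : -1;
    return result;
}
```

Note that at sum == C the probability hits 0 and at sum == -C it hits 1, which is exactly the forced-response cap described for C.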
I hope this helps. If it's not what you are looking for, let me know and I'll see what I can do.
Mike Melson
#3 Anonymous Poster_Anonymous Poster_* Guests  Reputation:
Posted 10 August 2001  03:34 AM
If you really care about this kind of binary coin flipping example, you will be much better off by randomly sorting or permuting an equal number of true and false values.
#4 Members  Reputation: 122
Posted 10 August 2001  09:23 AM
quote:
Original post by Kylotan
Imagine you are about to flip a coin 10 times. You expect 5 heads. So you flip the coin 9 times, and get 5 heads and 4 tails. Given a proper coin, the 10th flip has a 50% (p = 0.5) chance of being heads. This means that, once you've had 9 flips, you have a 50% chance of scoring 6 heads. However at the start, it was a 50% chance of scoring 5.
A small correction. The probability that in 10 flips you get 5 heads
is not 0.5 but smaller. To solve such problems (k successes in N trials) you must use the Bernoulli schema:
P(N,k) = N!/(k!*(N-k)!) * p^k * (1-p)^(N-k)
N - number of trials, k - number of successes,
p - probability of success, P(N,k) - probability that in
N trials you have k successes.
So P(10,5) = (10!/(5!*5!)) * 1/1024 = 252/1024
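To sanity-check numbers like the 252/1024 above, here is a small C helper (function names are mine) that computes the binomial coefficient exactly, without overflowing intermediate factorials:

```c
/* Binomial coefficient C(n, k) = n! / (k! * (n-k)!), computed as a
   running product so every intermediate division is exact. */
long long binomial(int n, int k)
{
    long long c = 1;
    for (int i = 1; i <= k; i++)
        c = c * (n - k + i) / i; /* c is C(n-k+i, i) after each step */
    return c;
}

/* P(n, k) for a fair coin (p = 1/2): C(n, k) / 2^n */
double fair_coin_prob(int n, int k)
{
    return (double)binomial(n, k) / (double)(1LL << n);
}
```

binomial(10, 5) gives 252, so P(10,5) = 252/1024 ≈ 0.246, matching the post.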
quote:
I have seen the term ''stochastic'' used in some contexts similar to this, but have yet to find an explanation of the term that is both precise and yet not overly complex. Perhaps this is the solution to my problem, but I don''t know.
The only meaning of "stochastic" I know about is stochastic processes. In that case stochastic means random as opposed to deterministic. I don't think they can help you, but I could be wrong.
quote:
(For those who are interested, basically I just want this so that I can have a random number system in a game that allows for freak events, but takes corrective action to reduce the chance of an unlucky streak damaging a player''s chances.)
Maybe you can try something like this, each time the "bad" event
happens decrease its basic probability by some amount. After that
let it slowly grow, so after some time it reaches its base probability.
K.
#5 Moderators  Reputation: 3346
Posted 10 August 2001  09:26 AM
Anonymous... I thought I made myself clear in the original post... it would be working on a percentage system, where I say "ok, this action has a 37% chance of success" and I want the system to return true about 37 times when called 100 times. The only difference between what I want, and a standard pseudo-random number generator, is that when I've called it 99 times and had 36 true results, I want a better than average chance of getting a 37th true result on that 100th call. Whereas a normal PRNG is stateless (as far as probabilities are concerned... I know they have internal state for their own purposes) and would just have a 37% chance of returning true on that 100th call, regardless of what had gone before.
#6 Moderators  Reputation: 3346
Posted 10 August 2001  09:35 AM
quote:
Original post by Grudzio
A small correction. The probability that in 10 flips you get 5 heads
is not 0.5 but smaller. To solve such problems (k successes in N trials) you must use the Bernoulli schema:
P(N,k) = N!/(k!*(N-k)!) * p^k * (1-p)^(N-k)
N - number of trials, k - number of successes,
p - probability of success, P(N,k) - probability that in
N trials you have k successes.
So P(10,5) = (10!/(5!*5!)) * 1/1024 = 252/1024
Yes, of course. I only understood 1% of what you just said, but I do know that what I said was wrong. What I should have said was more like "However at the start, you had a higher chance of getting 5 heads than 6."
quote:
The only meaning of "stochastic" I know about are stochastic processes. In such case stochastic means random as opposite to deterministic. I dont think they can help you,but I can be wrong.
I read that stochastic processes were (sometimes) to do with some sort of probabilistic compensation for events that take a random amount of time and can only be measured once they complete. I figured this had a relevance to my system since you can't know how many heads or tails you will have got until you flip it 9 times, however you know how many you want after 10 flips. But I know I'm largely stumbling in the dark here.
#7 Members  Reputation: 232
Posted 10 August 2001  06:24 PM
Also, I'm not sure why you would have to add a fractional value to SUM instead of +1 or -1. Maybe you could explain what sorts of return values the function would be spitting out. If they are just true/false values, then this system would work (I may just not be explaining what I mean well enough...).
Hope this helps.
Mike Melson
#8 Moderators  Reputation: 3346
Posted 10 August 2001  10:17 PM
Scenario: I call RandPercent(99) 5 times. It returns true 5 times. If C was 5, then the next call to RandPercent(99) will return false. This is obviously erroneous behaviour. Scoring 5 trues in a row for a 99% query is less 'unbalanced' than 5 trues in a row for a 50% (coin flipping) query. Therefore it should be 'tipping the scales' far less. With me?
#9 Anonymous Poster_Anonymous Poster_* Guests  Reputation:
Posted 13 August 2001  03:30 AM
Your chance is 37%. You generate a list of true/false values, and you set 37% of them to 'true.' Then you permute the list randomly. (Fairly easy and cheap to do.) Each call to get random returns the next value in the queue. When you hit the end, you have a choice of looping back to the start or re-permuting the list and starting over.
This also has the advantage of cutting down the number of potentially expensive calls to your system rand function, by essentially generating a lookup table of random values.
Want more general percentages? Permute the numbers from 0 to 99, then treat randvalue < chance as a true. You'll be true exactly chance out of 100 times.
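A sketch of this "shuffle bag" in C (names and the use of `rand()` for shuffling are my assumptions; a Fisher-Yates shuffle does the permuting):

```c
#include <stdlib.h>

#define BAG_SIZE 100

static int bag[BAG_SIZE];
static int bag_pos = BAG_SIZE; /* forces a shuffle on first use */

/* Refill the bag with the values 0..99 and permute it (Fisher-Yates). */
static void shuffle_bag(void)
{
    for (int i = 0; i < BAG_SIZE; i++)
        bag[i] = i;
    for (int i = BAG_SIZE - 1; i > 0; i--) {
        int j = rand() % (i + 1);
        int t = bag[i]; bag[i] = bag[j]; bag[j] = t;
    }
    bag_pos = 0;
}

/* True exactly `chance` times in every 100 consecutive calls. */
int bag_percent(int chance)
{
    if (bag_pos == BAG_SIZE)
        shuffle_bag(); /* re-permute when the bag runs out */
    return bag[bag_pos++] < chance;
}
```

Over any full cycle of 100 calls, bag_percent(37) is true exactly 37 times, by construction.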
#12 Anonymous Poster_Anonymous Poster_* Guests  Reputation:
Posted 13 August 2001  08:00 AM
Nah. With a little extra bookkeeping work, you can just keep one state list, and all you need per player is a one or two byte index into the list describing their current position. You're going to need that much state info per player no matter which scheme you use.
I'd probably set up a state list a few hundred bytes long and randomly scatter the players' starting points on the list as they join the game, and never recalculate. I doubt they'd notice that there's a pattern a few hundred positions long if they endlessly repeat the same action.
Actually, /I/ would probably just use rand, because I don't care about the observed probability matching the expected one under any sort of time bound.
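A sketch of that shared-table idea (one permuted table, a couple of bytes of position per player; every name here is hypothetical, and I've made the table a multiple of 100 so the percentages come out exact):

```c
#include <stdlib.h>

#define TABLE_SIZE 300 /* multiple of 100: each value 0..99 appears 3x */

static unsigned char table[TABLE_SIZE]; /* permuted values 0..99 */

typedef struct { unsigned short pos; } Player; /* two bytes per player */

/* Fill the shared table with 0..99 repeated, then permute it once. */
void table_init(void)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        table[i] = (unsigned char)(i % 100);
    for (int i = TABLE_SIZE - 1; i > 0; i--) { /* Fisher-Yates */
        int j = rand() % (i + 1);
        unsigned char t = table[i]; table[i] = table[j]; table[j] = t;
    }
}

/* Scatter each player's starting point; the table is never recalculated. */
void player_join(Player *p)
{
    p->pos = (unsigned short)(rand() % TABLE_SIZE);
}

int player_percent(Player *p, int chance)
{
    int v = table[p->pos];
    p->pos = (unsigned short)((p->pos + 1) % TABLE_SIZE); /* wrap around */
    return v < chance;
}
```

Any full wrap of TABLE_SIZE calls hits each table value exactly once, so a player sees exactly `chance` trues per 100 calls on average, regardless of starting point.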
#13 Anonymous Poster_Anonymous Poster_* Guests  Reputation:
Posted 13 August 2001  08:04 AM
#14 Moderators  Reputation: 3346
Posted 13 August 2001  10:07 AM
#15 Members  Reputation: 122
Posted 17 August 2001  09:50 AM
Will this work?
~ Dragonus
If F = ma, and Work = Fd, does that mean that Work = mad?
Edited by  Dragonus on August 17, 2001 4:54:11 PM
#17 Moderators  Reputation: 3346
Posted 17 August 2001  11:11 AM
What all these solutions are missing is the point I made in the post beginning "The reason for the fractional values..." If you are only counting the difference between true and false returns, you are considering them to be equally weighted, which they are not. They can be weighted arbitrarily, depending on what kind of calls I am making to the generator. If I call RandPercent(90) and get 5 true answers, your system will make the 6th call have a (90 / (5 + 1)) == 15% chance of returning true. This is wrong: there should still be a >50% chance of returning true.
I figure that the algorithm is on the right lines, however. Instead of adding 1 to 'difference' for every 'true' and subtracting 1 for every false, I think I need to add different values that reflect the likeliness of the past calls. For example, when I call RandPercent(90) and get true, that other 10% needs carrying over. Basically, I add less than 1 each time. Perhaps I add 0.2, or something, since the ratio of 0.2 to 10% is the same as 1 to 50%. But I'm not sure. Nor would I know how to prove its 'correctness' if I tried it.
Torn Space - no, it's not random any more. But then, they were only pseudo-random in the first place. I'm not really interested in how random they are, just that I get a stream of varying numbers which are biased towards 'proving the statistic' (e.g. 10% chance of success at a given activity) rather than each call being independent.
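One way to make the fractional bookkeeping concrete (a sketch under my own assumptions, not a proven design): add (1 - p) to a running balance on each true and subtract p on each false, so the expected change per call is zero for any p. For p = 0.9 a true then adds only 0.1, matching the intuition that a true at 90% is barely informative, while a false subtracts a full 0.9. A damping constant K controls how hard the balance pulls the next call:

```c
#include <stdlib.h>

static double balance = 0.0; /* trues observed minus trues expected */
static const double K = 5.0; /* correction strength; tune to taste */

/* p in [0, 1]: base probability of returning true (1) vs false (0). */
int rand_percent(double p)
{
    /* Pull the effective probability toward restoring the balance. */
    double adj = p - balance / K;
    if (adj < 0.0) adj = 0.0;
    if (adj > 1.0) adj = 1.0;
    int result = ((double)rand() / ((double)RAND_MAX + 1.0)) < adj;
    balance += result ? (1.0 - p) : -p; /* zero expected drift */
    return result;
}
```

Because balance equals (trues so far) minus (sum of the p's requested so far), and the clamping keeps it within roughly ±(K + 1), the observed count can never drift more than a few trues from the expected count, whatever mix of percentages is requested.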
#18 Members  Reputation: 517
Posted 18 August 2001  07:58 PM
#19 Members  Reputation: 338
Posted 19 August 2001  09:26 AM
You've got to give a cap if you want the statistic proved to a much better degree. Perhaps a system size indicator would help, wherein small = 10 (the minimum for an integer percentage to be proven to the nearest 10th), medium = 100 (the minimum for an integer percentage proving), and large > 100 (can be proved to within one range level).
I have no idea what you would need this for, but I would suggest taking a long hard look at the design you're using if the events you wish to bias are severe enough to undermine the average random number generator.
ld
#20 Members  Reputation: 338
Posted 19 August 2001  11:06 AM
int NewRand(int pctg)
{
    static int trues = 0;      // total # trues so far
    static int totalcalls = 0; // total # function calls
    int newChance;             // adjusted value of percent chance
    int result;

    // Adjust the percentage to compensate for the difference
    // between actual and desired. As the difference approaches
    // the order of the percentage itself, the system approaches
    // a 0% chance of returning a true.
    // Might as well increment the # of calls here, as well.
    // (Note the *100 so the observed fraction compares to a percentage.)
    newChance = pctg - ((int)(100.0f * trues / (float)(++totalcalls)) - pctg);

    // constrain the chance to 0 thru 100, obviously
    if (newChance < 0) newChance = 0;
    else if (newChance > 100) newChance = 100;

    // GenRand is a generic PRNG on a percentage system
    result = GenRand(newChance);

    // If we got a true, record it
    if (result) ++trues;
    return result;
}
This is pretty close to what Mike Melson wrote earlier, but I came up with it independently and figured I'd put it down to make sure we're on the same page.
ld
Edited by  liquiddark on August 19, 2001 6:09:16 PM