Links to Pseudo-random Theory?

1) If you pick a random point in a 2x2 square, the probability of the point being inside the unit circle is p = pi/4 ~= 0.785398163 . If you now sample this n times, and count how many points are inside the unit circle, you'll get a probability distribution whose mean is p*n and whose variance is p*(1-p)*n. If n is large, this distribution is well approximated by a normal distribution with that mean and variance. So you can compute your number, subtract the mean, divide by the square root of the variance and the result should follow a standard normal distribution pretty closely.

2) In the procedure described above, each sample will give you the most information when p is close to 0.5. If you use a 3-dimensional sphere, p = 4/3*pi/8 ~= 0.523598776 , so things should work better than for a 2-dimensional sphere. Above that, p gets small quickly as a function of the dimensionality (0.308425138, 0.164493407, 0.0807455122...).
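To make point 1 concrete, here's a minimal C++ sketch of that count-and-normalize procedure (rand() is only a stand-in for whatever PRNG you actually want to test, and the sample count n is arbitrary):

#include <cmath>
#include <cstdlib>
#include <iostream>
using namespace std;

int main()
{
    const double p = atan(1.0); // pi/4 ~= 0.785398163
    const size_t n = 1000000;   // number of sample points

    size_t inside = 0;

    for(size_t i = 0; i < n; i++)
    {
        // Random point in the 2x2 square centred on the origin.
        // rand() is only a placeholder for the PRNG under test.
        double x = 2.0*rand()/RAND_MAX - 1.0;
        double y = 2.0*rand()/RAND_MAX - 1.0;

        if(x*x + y*y <= 1.0)
            inside++;
    }

    // Subtract the mean p*n and divide by the square root of the variance p*(1 - p)*n.
    double z = (inside - p*n)/sqrt(p*(1.0 - p)*n);

    // For a good PRNG, z should look like a draw from a standard normal
    // distribution, so values like |z| > 3 are suspicious.
    cout << "z = " << z << endl;

    return 0;
}

For point 2, those decimals come from the standard ratio of the volume of the unit n-ball to the volume of the enclosing 2x2x...x2 hypercube, [eqn]p_n = \frac{\pi^{n/2}}{2^n \Gamma(n/2 + 1)}[/eqn], which gives pi/4 for n = 2 and pi/6 for n = 3. To test in higher dimensions, swap that value in for p and generate one extra coordinate per dimension.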
One comment about testing for randomness: A quick and dirty test would be to try to compress (e.g. gzip) a large chunk of binary data generated by your PRNG and see if the compressed file is any smaller than the original. If it is, there is a good chance the PRNG is flawed (because the compressor found some structure in the data to exploit).
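Here's a rough sketch of that idea, assuming zlib is available (link with -lz); the buffer size and the rand() filler are just placeholders, and in practice you'd want far more than a megabyte of output from the PRNG under test:

#include <cstdlib>
#include <iostream>
#include <vector>
#include <zlib.h>
using namespace std;

int main()
{
    // Fill a buffer with raw bytes from the PRNG under test.
    // rand() is only a stand-in here; use your own generator.
    vector<unsigned char> data(1024*1024);

    for(size_t i = 0; i < data.size(); i++)
        data[i] = static_cast<unsigned char>(rand() & 0xFF);

    // Compress the buffer and compare sizes.
    uLongf compressed_size = compressBound(data.size());
    vector<unsigned char> compressed(compressed_size);

    if(compress2(&compressed[0], &compressed_size, &data[0], data.size(), Z_BEST_COMPRESSION) != Z_OK)
    {
        cout << "Compression failed." << endl;
        return 1;
    }

    cout << "Original:   " << data.size() << " bytes" << endl;
    cout << "Compressed: " << compressed_size << " bytes" << endl;

    // If the compressed output is noticeably smaller than the original,
    // the compressor found structure to exploit -- a bad sign for the PRNG.
    return 0;
}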
Right I've been reading about some of these tests in some PDF on one of the official cryptography standards (not sure it's not the only one, just know that it is one). I'm really starting to see the benefit of analyzing the 1's and 0's and not using weird birthday tests or pi tests or such things. I still kind of wanted to try my hand at it though lol. ;]

The math really is above me, I never learned what Σ or ∫ mean, and I'm really not used to reading it. The worst part is that when people tell me what formula to use they hardly ever explain how they derived that formula, they just expect me to take it on faith lol. And when they do it's all in technical mathy terms that I have no way of understanding in my current position. I'm getting it though, it's rather slow progress but I'm getting it.

I don't intend to give up until I've learned more about randomness. It's kind of a newfound passion of mine.


As always, thanks guys. I really appreciate it.
[quote]The math really is above me, I never learned what Σ or ∫ mean, and I'm really not used to reading it. The worst part is that when people tell me what formula to use they hardly ever explain how they derived that formula, they just expect me to take it on faith lol. And when they do it's all in technical mathy terms that I have no way of understanding in my current position. I'm getting it though, it's rather slow progress but I'm getting it.[/quote]
Ah, yes. Well if you have no notion of calculus it'll be hard to understand, but if you do it does make sense intuitively. Basically, when you want to find the mean of a sequence of numbers, you add them up, then divide the total by the number of elements, right? Well, what if there are in fact infinitely many possible real numbers between 0 and 1 that the variable can take? In that case, instead of adding them all up manually (which isn't possible), you integrate over the whole domain (which gives you the area under the curve, the continuous analogue of the sum), weighting each value by a construct which basically tells you how likely the random variable of interest is to land near a given value (the probability density; for a uniform distribution it's constant, which helps). Now what if you have a quantity derived from two independent random variables? In that case you integrate twice, once for each variable. And to obtain the variance it's the same, just using a different formula.
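To make that concrete with the simplest textbook case: for a uniform random variable X on [0, 1], the density is just the constant 1, so
[eqn]
E[X] = \int_0^1 x \, dx = \frac{1}{2}, \qquad \mathrm{Var}(X) = \int_0^1 \left(x - \tfrac{1}{2}\right)^2 dx = \frac{1}{12}.
[/eqn]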

But Alvaro's explanation is better than mine (especially the comment on compressing the algorithm's output).

[quote]Right I've been reading about some of these tests in some PDF on one of the official cryptography standards (not sure it's not the only one, just know that it is one). I'm really starting to see the benefit of analyzing the 1's and 0's and not using weird birthday tests or pi tests or such things. I still kind of wanted to try my hand at it though lol. ;][/quote]
Indeed, all possible randomness tests are a subset of one very particular statement, which asserts that if a sequence is truly random, it is unconditionally impossible to predict the next bit output with probability better than 0.5. For cryptography, we often don't get true random numbers, so for pseudorandom numbers there is a weaker (but still useful) formulation of the statement: if a sequence is output from a strong cryptographic pseudorandom number generator, it should be impossible to predict the next bit output with probability better than 0.5 given all previous outputs, in reasonable time ("reasonable" means beyond the computational reach of any attacker). If a PRNG satisfies this property, it will pass *all* other randomness tests.

But birthday tests etc. are still very useful, because testing the above usually requires some really nasty math (involving cryptanalysis), which is beyond me anyhow! And note that many good non-cryptographic PRNG's definitely don't satisfy the above property (the Mersenne Twister, etc.), yet are used, because it's not an if-and-only-if condition: it's possible to pass many small tests and fail the cryptographic one, but it is not possible to pass the cryptographic test and subsequently fail any other. The cryptographic property is overkill anyway in most cases not involving crypto.

[quote]Right I've been reading about some of these tests in some PDF on one of the official cryptography standards (not sure it's not the only one, just know that it is one). I'm really starting to see the benefit of analyzing the 1's and 0's and not using weird birthday tests or pi tests or such things. I still kind of wanted to try my hand at it though lol. ;]

The math really is above me, I never learned what Σ or ∫ mean, and I'm really not used to reading it. The worst part is that when people tell me what formula to use they hardly ever explain how they derived that formula, they just expect me to take it on faith lol. And when they do it's all in technical mathy terms that I have no way of understanding in my current position. I'm getting it though, it's rather slow progress but I'm getting it.

I don't intend to give up until I've learned more about randomness. It's kind of a newfound passion of mine.

As always, thanks guys. I really appreciate it.[/quote]


The US National Institute of Standards and Technology (NIST) has a suite of tests that are also popular:

http://csrc.nist.gov.../rng/index.html

Some brief descriptions of the tests are at:

http://csrc.nist.gov...tats_tests.html

The one test that seems closer than most to the compression-based test that alvaro suggested is the "Approximate Entropy Test". The basic idea is to take your bitstream and break it into m-bit chunks, where m is 1, 2, etc. and see if there is an imbalance in how frequent each chunk is. For instance, take the bitstream 01010101. If you break it into m=1 bit chunks, you'll find that the chunk 0 and the chunk 1 both occur with the same frequency, so there is no imbalance. Awesome. Next, break it into m=2 bit chunks. You'll notice right away that the chunk 01 occurs four times, but not once did the chunk 00, 10, or 11 occur. Not awesome. That's a big imbalance, and obviously the bitstream is nowhere near random. Of course, your bitstream would contain more than just 8 bits, and your m-bit chunks would get larger than just m=2.
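Here's a bare-bones C++ sketch of that chunk-counting idea, using non-overlapping chunks and a string of '0'/'1' characters standing in for a real bitstream. (The actual NIST test uses overlapping blocks and a proper statistic, so this only illustrates the imbalance part.)

#include <iostream>
#include <map>
#include <string>
using namespace std;

int main()
{
    // The bitstream from the example above, as a string of '0'/'1' characters.
    string bits = "01010101";

    for(size_t m = 1; m <= 2; m++)
    {
        // Count how often each non-overlapping m-bit chunk occurs.
        map<string, size_t> counts;

        for(size_t i = 0; i + m <= bits.length(); i += m)
            counts[bits.substr(i, m)]++;

        cout << "m = " << m << ":" << endl;

        for(map<string, size_t>::const_iterator j = counts.begin(); j != counts.end(); j++)
            cout << "  chunk " << j->first << " occurred " << j->second << " time(s)" << endl;
    }

    return 0;
}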

Essentially, a greater entropy means a greater balance in the frequency of symbols and a greater incompressibility. Get your toes wet with the Σ (summation) symbol by getting to understand how to calculate the entropy of a bunch of symbols: http://en.wikipedia....eory)#Rationale

Here's the equation for Shannon entropy in natural units. Don't think too hard about it until after you read the next few paragraphs:
[eqn]
S = - \sum_{i=1}^n p_i \ln p_i,
[/eqn]
where [eqn]\ln[/eqn] is the natural logarithm (ie. log() in C++).

It's pretty straightforward. Say you have a group of 6 symbols altogether. They could be binary digits, decimal digits, characters, words, images, paragraphs, etc., but we'll use characters: 'a', 'b', 'c', 'd', 'e', 'a'.

In the end you only have n = 5 distinct symbols; the letter 'a' is repeated once. In terms of probability, the letter 'a' occurs [eqn]p_1 = 2/6[/eqn]th of the time, 'b' occurs [eqn]p_2 = 1/6[/eqn] th of the time, 'c' occurs [eqn]p_3 = 1/6[/eqn] th, 'd' occurs [eqn]p_4 = 1/6[/eqn] th, and 'e' occurs [eqn]p_5 = 1/6[/eqn] th of the time. Just so you know, these probabilities for all of these 5 distinct symbols should add up to 1:
[eqn]
1 = \sum_{i=1}^n p_i
[/eqn]
In expanded form, this is:
[eqn]
1 = p_1 + p_2 + p_3 + p_4 + p_5.
[/eqn]
Now that you have a basic idea of what Σ (summation) means, let's calculate the entropy. In expanded form, it's:
[eqn]
S = - (p_1 \ln p_1 + p_2 \ln p_2 + p_3 \ln p_3 + p_4 \ln p_4 + p_5 \ln p_5)
[/eqn]
[eqn]
S = - (2/6 \ln 2/6 + 1/6 \ln 1/6 + 1/6 \ln 1/6 + 1/6 \ln 1/6 + 1/6 \ln 1/6)
[/eqn]
[eqn]
S = -[-0.366204 + 4*(-0.298627)]
[/eqn]
[eqn]
S = -[-1.56071]
[/eqn]
[eqn]
S = 1.56071
[/eqn]
You can divide this number 1.56071 by [eqn]\ln 2 = 0.693147[/eqn] to get the entropy in binary units: 2.25. This tells you that each symbol in this stream of 6 symbols (5 of them distinct) would require roughly 2.25 bits of information to encode, on average. It's worth noting that the total number of data (6) and the distinct number of data (5) are whole numbers, but the average information content per datum (1.56, or 2.25 in binary units) is not. Yes indeed, data are not at all the same thing as information, and the popular concept of "a piece/indivisible unit of information" is not generally valid. I find these confusions perpetuated surprisingly often by math/comp sci/physics experts, and it annoys me a little bit.

Try the group of 8 symbols '0', '1', '2', '3', '4', '5', '6', '7'. Note that there are 8 symbols and all are distinct. You should get a natural entropy of [eqn]S = \ln 8[/eqn]. Divide that by [eqn]\ln 2[/eqn] to get the binary entropy of 3. This basically says that you need 3 bits of information to encode 8 distinct (and equally frequent) symbols. If you understand that an n-bit integer can represent 2^n symbols, then this should not be a shock to you (ie. a 16-bit integer can represent 2^16 = 65536 distinct values, which corresponds to a binary entropy of 16). Yes, the binary information content per symbol can be a whole number, but this is only a special case. Just remember that data are not the same thing as information.
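In symbols, for [eqn]2^n[/eqn] equally frequent distinct symbols (each with probability [eqn]1/2^n[/eqn]), the binary entropy works out to exactly n:
[eqn]
S_{\mathrm{binary}} = \frac{\ln 2^n}{\ln 2} = \log_2 2^n = n.
[/eqn]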

Now try the group of 8 symbols '0', '0', '0', '0', '0', '0', '0', '0'. The natural entropy is simply [eqn]S = \ln 1 = 0[/eqn]. No need for distinctness, no need for information. Yes, the binary information content per symbol can be a whole number, but again, this is only a special case. Just remember that data are not the same thing as information.

Essentially entropy is information, and it is a measurement taken of data. It's definitely not the same thing as data. I mean, the length of your leg is obviously not the very same thing as your leg (one's a measurement, one's bone and flesh).

Also, can you see how the group '0', '0', '0', '0', '0', '0', '0', '0' would be greatly compressible compared to the group '0', '1', '2', '3', '4', '5', '6', '7'? If not, you should look up run-length encoding (a type of compression) and then dictionary-based compression. Remember: very high entropy data is largely incompressible. In the context of that m-bit chunk test, we'd also cut the set of 8 symbols into subsets that are larger than one digit each (ie. not just subsets of single digits like '0', '1', '2', '3', '4', '5', '6', '7', but also subsets like '01', '23', '45', '67', etc.), so there would be multiple measures of entropy. I hope that isn't too confusing. If it is too confusing, then just ignore this paragraph altogether.
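If run-length encoding is new to you, here's a bare-bones C++ sketch of the idea. It just prints the runs of identical characters; real RLE formats pack the counts into the output, but the point is that "00000000" collapses to a single run while "01234567" doesn't collapse at all:

#include <iostream>
#include <string>
using namespace std;

int main()
{
    // Try swapping in "01234567" to see the difference.
    string data = "00000000";

    for(size_t i = 0; i < data.length(); )
    {
        // Count the length of the run of identical characters starting at i.
        size_t run = 1;

        while(i + run < data.length() && data[i + run] == data[i])
            run++;

        cout << run << " x '" << data[i] << "'" << endl;

        i += run;
    }

    return 0;
}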

Have you not done any calculus at all? This is where you will see ∫ (indefinite integral / antiderivative / opposite of local slope, definite integral / area / volume / hypervolume / anything else that can be added up using infinitesimally small chunks) pop up a lot. You owe it to yourself to get informed. Upgrading your knowledge from algebra to algebra+calculus is kind of like upgrading from arithmetic to arithmetic+algebra. It's a very powerful upgrade, and there are a bajillion ways to apply that knowledge.
Got quite a few positive ratings for that post, so here's some juicy C++ code that uses std::map to calculate the entropy of a string of characters. You can use it to calculate the entropy of "abcdea" and "01234567" just as well as "hello world":


#include <cmath>
#include <iostream>
#include <map>
#include <string>
using namespace std;

int main(void)
{
    string stuff = "hello world";
    map<char, size_t> symbols;

    // For each symbol (character) in the symbol stream (string)...
    for(string::const_iterator i = stuff.begin(); i != stuff.end(); i++)
    {
        // Search for symbol in map.
        if(symbols.find(*i) == symbols.end())
        {
            // If it doesn't exist, insert it and give it a count of 1.
            symbols[*i] = 1;
        }
        else
        {
            // If it does exist, increment its count.
            symbols[*i]++;
        }
    }

    float entropy = 0;

    cout << "Input string: \"" << stuff << "\" contains " << stuff.length() << " symbols, of which " << symbols.size() << " are distinct.\n" << endl;

    for(map<char, size_t>::const_iterator i = symbols.begin(); i != symbols.end(); i++)
    {
        float p_i = static_cast<float>(i->second) / static_cast<float>(stuff.length());

        cout << "Symbol '" << i->first << "' occurred " << i->second << '/' << stuff.length() << " = " << p_i << "th of the time." << endl;

        entropy += p_i * log(p_i);
    }

    // Don't bother negating if already 0.
    if(0 != entropy)
        entropy = -entropy;

    cout << "\nEntropy: " << entropy << " (binary: " << entropy/log(2.0) << ')' << endl;

    return 0;
}


This spits out...

Input string: "hello world" contains 11 symbols, of which 8 are distinct.

Symbol ' ' occurred 1/11 = 0.0909091th of the time.
Symbol 'd' occurred 1/11 = 0.0909091th of the time.
Symbol 'e' occurred 1/11 = 0.0909091th of the time.
Symbol 'h' occurred 1/11 = 0.0909091th of the time.
Symbol 'l' occurred 3/11 = 0.272727th of the time.
Symbol 'o' occurred 2/11 = 0.181818th of the time.
Symbol 'r' occurred 1/11 = 0.0909091th of the time.
Symbol 'w' occurred 1/11 = 0.0909091th of the time.

Entropy: 1.97225 (binary: 2.84535)
Here's a simple introduction to some of the major words behind calculus (derivative, integral), but it's not very general, so expect to learn more on your own:

Consider the extremely simple function [eqn]f(x) = x[/eqn]. The function is a straight line. Go to wolframalpha.com and type in "plot x from 0 to 1" to see it.

The derivative of this function is [eqn]f^\prime(x) = \frac{df(x)}{dx} = 1[/eqn] (ie. the change in [eqn]f(x)[/eqn] with respect to the change in x). Go to wolframalpha.com and type "derivative x" to see it. It may be helpful to think of the derivative as being somewhat analogous to the slope, because they do measure essentially the same thing. The difference is that slope is rise over run, but the derivative technically does not have a run -- it's calculated for an infinitesimally small run, which is effectively no run at all. Of course, keep in mind that functions are generally not as simple as this [eqn]f(x)[/eqn] and so derivatives are not as simple as this [eqn]f^\prime(x)[/eqn], but hopefully you get the idea that derivative is related to the idea of slope.
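In symbols, the derivative is the limit of rise over run as the run h shrinks to zero; for f(x) = x it comes out to exactly 1:
[eqn]
f^\prime(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h} = \lim_{h \to 0} \frac{(x + h) - x}{h} = 1.
[/eqn]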

The indefinite integral of [eqn]f(x)[/eqn] is [eqn]\int f(x) dx = F(x) = \frac{x^2}{2}[/eqn]. Go to wolframalpha.com and type "integral x" to see it. Note that the derivative of [eqn]\frac{x^2}{2}[/eqn] is x, so you can say that the indefinite integral is the antiderivative.

The definite integral adds bounds to the indefinite integral (ie. it defines bounds, which is why it's called definite) and can be used to calculate the area underneath the line drawn by [eqn]f(x)[/eqn]. For instance, the area under the line in the interval between x = 0 and x = 1 is given by the definite integral [eqn]\int_0^1 f(x) dx = F(1) - F(0) = \frac{1^2}{2} - \frac{0}{2} = 1/2[/eqn]. Go to wolframalpha.com and type "integral x from 0 to 1". Consider that if you take a square that is 1x1 and cut it in half along the diagonal, you get a right angle triangle of area 1/2. Basically the definite integral adds up an infinite number of infinitesimally small "rectangular" regions that stretch from y = 0 to y = x in the interval between x = 0 and x = 1. Look at the concept of Riemann sum, but imagine that the rectangles are so thin (in terms of width) that they are in fact just lines. Consider that the word integrate means to add together. The definite integral is basically fancy summing. This was mentioned in another post above.
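If you'd like to see that "fancy summing" in code, here's a tiny C++ sketch of a Riemann sum for f(x) = x on [0, 1] (the number of rectangles is arbitrary; the more you use, the closer the sum gets to 1/2):

#include <iostream>
using namespace std;

// The function whose area we want, f(x) = x.
double f(double x)
{
    return x;
}

int main()
{
    const size_t n = 1000000; // number of thin rectangles
    const double a = 0.0, b = 1.0;
    const double dx = (b - a)/n;

    double area = 0;

    // Add up n rectangles of width dx and height f(x).
    for(size_t i = 0; i < n; i++)
        area += f(a + i*dx)*dx;

    cout << "Area: " << area << endl; // roughly 0.5

    return 0;
}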

After you've got your toes wet with this, you'll need to backtrack and see how you can go from classic slope (a large run) to derivative (an infinitesimally small run) via the concept of limits. You may wish to visit the concept of secant line before you tackle limits -- basically the derivative is analogous to the slope that you'd get by making a fancy secant line where the separation between the two points (ie. the run) is infinitesimally small. At the core of it all is the fundamental theorem of calculus. You'll need to learn your differentiation rules. When you study the very simple power rule, you'll see how the derivative (the opposite of the indefinite integral, if you like) of [eqn]\frac{x^2}{2}[/eqn] is x, and how the derivative of x is 1 (it's a special case), which is precisely what we covered here in the past few paragraphs. I really did use the most extremely simple example for f(x) that still has a non-zero derivative. Consider f(x) = 1. The derivative of that is just zero (ie. no change in f(x) with respect to the change in x), which would have been too simple to be useful as an example.
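The power rule mentioned there is just
[eqn]
\frac{d}{dx} x^n = n x^{n-1},
[/eqn]
so the derivative of [eqn]\frac{x^2}{2}[/eqn] is [eqn]\frac{2x}{2} = x[/eqn], and the derivative of [eqn]x = x^1[/eqn] is [eqn]1 \cdot x^0 = 1[/eqn], exactly as above.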

Now try using something more complicated for [eqn]f(x)[/eqn], like [eqn]f(x) = \cos(x)[/eqn] where [eqn]f^\prime(x) = -\sin(x)[/eqn] and [eqn]F(x) = \sin(x)[/eqn].

I learned from The Complete Idiot's Guide to Calculus, and it worked very well for me, so don't be too proud when picking out training material. Of course, by now you probably have got at least one point hammered down: it's all about making measurements using infinitesimally small things. It also helps that there are things like wolframalpha.com (based on Mathematica) to help check your work, as well as other relatively dirt cheap offline CAS apps for most smartphones too (MathStudio is a really decent one). Sage is pretty amazing if you want a free CAS for your PC. There are also free Sage sites that let you do your work using their installation -- no hassle on your part, really, and you still get to save and reload long workbooks for long complicated jobs.

Now remember your mention of the term "Monte Carlo". I'm sure you know this already, but the origin of the term is "Monte Carlo integration". For those not familiar with this, say you want to find the area under f(x) = x in the interval x = 0 to x = 1, but you don't want to go through the bother of working out the definite integral, etc. Well, randomly place n points in the 1x1 square region bound by x = 0 to x = 1, y = 0 to y = 1. Now, assign each point an area of A = 1x1/n = 1/n. Finally, count how many points lie on or below the line drawn by f(x) = x and multiply that count by A. You should get a final answer of roughly 1/2. Hence, Monte Carlo (random, as in like, a casino game) integration (adding). Try it for something much more complicated than [eqn]f(x) = x[/eqn]. Also, consider that sometimes you have a plot of points but you don't actually have the f(x) that was used to generate them. You can't get the definite integral if you don't know f(x), so Monte Carlo integration (or a Riemann sum) can help you if you need to know the area under the curve in that case. Plus, there are some f(x) whose antiderivative simply cannot be written down in closed form, so you are absolutely forced to use numerical methods like Monte Carlo (or a Riemann sum) for these. I learned about this (and about Towers of Hanoi, and, and, and...) in a wicked awesome book called "C for Scientists and Engineers" by Johnsonbaugh and Kalin. I notice that Amazon.com only gives this book a 4.5 out of 5. That's utterly ridiculous. The rating should be 5 out of 5, no questions asked.
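Here's a minimal C++ sketch of that Monte Carlo procedure for f(x) = x on the unit square (again, rand() is just a placeholder for a better generator):

#include <cstdlib>
#include <iostream>
using namespace std;

int main()
{
    const size_t n = 1000000; // number of random points
    size_t below = 0;

    for(size_t i = 0; i < n; i++)
    {
        // Random point in the 1x1 square.
        double x = static_cast<double>(rand())/RAND_MAX;
        double y = static_cast<double>(rand())/RAND_MAX;

        // Count it if it lies on or below the line y = f(x) = x.
        if(y <= x)
            below++;
    }

    // Each point represents an area of 1/n.
    cout << "Estimated area: " << static_cast<double>(below)/n << endl; // roughly 0.5

    return 0;
}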

[quote]Ah, yes. Well if you have no notion of calculus it'll be hard to understand, but if you do it does make sense intuitively. Basically, when you want to find the mean of a sequence of numbers, you add them up, then divide the total by the number of elements, right? Well, what if there are in fact infinitely many possible real numbers between 0 and 1 that the variable can take? In that case, instead of adding them all up manually (which isn't possible), you integrate over the whole domain (which gives you the area under the curve, the continuous analogue of the sum), weighting each value by a construct which basically tells you how likely the random variable of interest is to land near a given value (the probability density; for a uniform distribution it's constant, which helps). Now what if you have a quantity derived from two independent random variables? In that case you integrate twice, once for each variable. And to obtain the variance it's the same, just using a different formula.[/quote]


I almost followed you, Bacterius. ;D



I think I do get SOME of the standard tests in a layman's kind of way. In all of them you're looking for two things: is the normal curve (which I know from rolling an infinite number of dice) adequately reproduced by the PRNG within some margin of error, and do the extremes of the normal curve occur more often than expected (like ever, within a short run)?

So you basically pretend that the result is inherently random, and check the odds of getting the result you got.
If you didn't get a result you should get 99% of the time (like never seeing a single 1), then it's 99% probable that it's not random.
(Same thing from the other direction: if you did get a result you should only get 1% of the time, like all 0's, then it's 99% probable that it's not random.)
Otherwise, it can't be proven non-random (the null hypothesis, I think). You can only ever show that it probably isn't random.

The key is that there's no way to know for sure, only a way to say that a PRNG is 99% probably random, even though (being deterministic) it absolutely is not.
I just love that thought process. I'm having a lot of fun. :D


So anyway for a basic PRNG you're just trying to make sure no one can intuit patterns by watching the results. For a Crypt-PRNG you want to make sure a computer can't detect patterns in a reasonable amount of computation.
The trick is to find all of the obvious patterns (which could still potentially occur in randomness) and eliminate them.

At least, that's how it goes for randomness. (At least I think that's how it goes!) There may be other factors a client is more interested in. Like me, I'm somewhat interested in finding QRNGs (quasi-random number generators) which have more equally distributed but still unpredictable points. So I guess the more Q in the RNG, the more you can guess where a random number won't appear. ;]

[attached image: quasirandom1.jpg]

[quote]The US National Institute of Standards and Technology (NIST) has a suite of tests that are also popular:

http://csrc.nist.gov.../rng/index.html

Some brief descriptions of the tests are at:

http://csrc.nist.gov...tats_tests.html[/quote]


Yep, that's the one I found earlier. Thanks for the link though, I had misplaced it recently.




..."Approximate Entropy Test". The basic idea is to take your bitstream and break it into m-bit chunks, where m is 1, 2, etc. and see if there is an imbalance in how frequent each chunk is. For instance, take the bitstream 01010101. If you break it into m=1 bit chunks, you'll find that the chunk 0 and the chunk 1 both occur with the same frequency, so there is no imbalance. Awesome. Next, break it into m=2 bit chunks. You'll notice right away that the chunk 01 occurs four times, but not once did the chunk 00, 10, or 11 occur. Not awesome. That's a big imbalance, and obviously the bitstream is nowhere near random. Of course, your bitstream would contain more than just 8 bits, and your m-bit chunks would get larger than just m=2.


Hey wouldn't {0000 through 1111} automatically contain every combination of {00 through 11} exactly twice?
I mean if you didn't overlap the bits.



[quote]Essentially, a greater entropy means a greater balance in the frequency of symbols and a greater incompressibility. Get your toes wet with the Σ (summation) symbol by getting to understand how to calculate the entropy of a bunch of symbols: http://en.wikipedia....eory)#Rationale

Here's the equation for Shannon entropy in natural units. Don't think too hard about it until after you read the next few paragraphs:[/quote]


Thank you for the effort here sir! Gonna need a while to mull this over though, but seriously, thank you.
If you get how the standard normal distribution applies to random variables, and that its definite integral from -infinity to infinity is 1, then perhaps you know more than you're letting on (or more than you consciously realize)? ;)
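For reference, that's the statement that the standard normal density integrates to 1 over the whole real line:
[eqn]
\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \, dx = 1.
[/eqn]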

Perhaps the following is also a way to look for randomness, in the continuous case, but who knows if I'm right: If you have a set S of n values from the continuous interval [min(S), max(S)], and there's no repetition in the values, and there's no repetition in the first derivatives (successive differences, really), and no repetition in the second derivatives, etc., and no repetition in the (n - 1)th derivatives, then I guess it's maximally random because nothing was repetitious at all. This is to say that the entropy of S would be [eqn]\ln n[/eqn], the entropy of the set of first derivatives would be [eqn]\ln(n - 1)[/eqn], etc. Of course, you're looking at storing/working on an upper triangular matrix of [eqn](n^2 + n)/2[/eqn] non-null elements (eventually containing ubergargantuan numbers for the highest derivatives), which could get pretty nasty for very large n.

Some example matrices for some sets of n = 5 values would be something like:

Values: 0.2 0.4 0.7 0.1 0.5
1st derivatives: ... 0.2 0.3 -0.6 0.4
2nd derivatives: ... ... 0.1 -0.9 1.0
3rd derivatives: ... ... ... -1.0 1.9
4th derivatives: ... ... ... ... 2.9

Values: 0.2 0.4 0.7 0.1 0.3
1st derivatives: ... 0.2 0.3 -0.6 0.2 <-- repetition on this row!!!
2nd derivatives: ... ... 0.1 -0.9 0.8
3rd derivatives: ... ... ... -1.0 1.7
4th derivatives: ... ... ... ... 2.7

Values: 0.1 0.2 0.3 0.4 0.5
1st derivatives: ... 0.1 0.1 0.1 0.1 <-- repetition on this row!!!
2nd derivatives: ... ... 0.0 0.0 0.0 <-- repetition on this row!!!
3rd derivatives: ... ... ... 0.0 0.0 <-- repetition on this row!!!
4th derivatives: ... ... ... ... 0.0


I guess if you wanted to get really hardcore, you could look for repetition not only in terms of each individual row, but in terms of the entire matrix. In this case, maximal randomness would only be achieved if the entropy of the non-null elements was [eqn]\ln[(n^2 + n)/2][/eqn]. In this case, the first example matrix would be slightly repetitious because 0.1, 0.2, and 0.4 all appear in multiple rows, and so the entropy / randomness would not be maximal.

Then again, perhaps these results could give something other than a normal distribution, yet still be considered "maximally random" by my definition, which would clearly be an issue for those who rely on the normal distribution method. So, I'm probably entirely wrong from the start, but I thought that I'd put it out there anyway.
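For what it's worth, here's a rough C++ sketch of how one might build that table of successive differences and flag rows with repeated values, using the first example set above. Exact floating-point comparison is naive, so treat this purely as an illustration of the bookkeeping:

#include <iostream>
#include <set>
#include <vector>
using namespace std;

int main()
{
    // The first example set from above.
    double values[] = { 0.2, 0.4, 0.7, 0.1, 0.5 };
    vector<double> row(values, values + 5);

    for(size_t level = 0; !row.empty(); level++)
    {
        // Check this row for repeated values.
        set<double> distinct(row.begin(), row.end());

        cout << "Row " << level << ": " << row.size() << " values, "
             << distinct.size() << " distinct"
             << (distinct.size() < row.size() ? "  <-- repetition!" : "")
             << endl;

        // Build the next row of differences.
        vector<double> next;

        for(size_t i = 1; i < row.size(); i++)
            next.push_back(row[i] - row[i - 1]);

        row = next;
    }

    return 0;
}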
