Maxwell-Boltzmann Distribution
I'm toying with a particle engine modeled after the ideal gas law and the kinetic molecular theory (aka kinetic theory of gases). The basic gist of the part of the theory that I'm interested in is that as the absolute temperature of a system increases so too will the velocities of the particles in the system. Drilling into the theory, however, it turns out that the particles in such a system have a range of velocities associated with the temperature that can be described by the Maxwell–Boltzmann distribution.
Rather than simply assign the most probable speed for a given temperature to each of the particles in my engine, I'd like to distribute a speed to each particle using the Maxwell–Boltzmann distribution (see equation 10 in the above link). I'm not completely certain how to do this, so I thought I'd ask here for tips.
Has anyone here done this before?
I'm thinking that velocity is dependent on temperature, yet the probability equation (10) appears to treat both v and T as independent variables. I don't want to get into math that is more complicated than these equations, so resolving the dependency relation between v and T is probably too much. I'm thinking that if I hold T constant and then iterate over the range of speeds, at each step on the speed scale I'll get the number of particles that should be assigned that particular speed. And then if I change T, I'll need to recompute this distribution. Is this the way to do it, or is there a better way?
I've never done something like that, but it looks quite hard to sample from that distribution. But if you need to set the velocity of the particles, why are you calculating the magnitude of the velocity when you can calculate the components with equation 9? Each component is independent and normally distributed.
You can get an approximately normally distributed random variable by summing many independent random variables with equal distributions (the central limit theorem). The variance of the sum will be the number of terms times the variance of each uniform term.
Hmm... so if you add n uniformly distributed variables between -a*sqrt(3/n) and +a*sqrt(3/n), you should get an approximately normally distributed random variable with variance a^2.
In C it might look something like this:
velx = 0;
for (i = 0; i < n; i++) {
    velx += (random() - 0.5) * 2 * a * sqrt(3.0 / n);
}
Do this for each axis and you're done. What n to use depends on performance, but already at 10 you'll have a pretty good normally distributed random variable.
I vaguely remember that normal variate generation trick from a simulations course I took in school, but is the Maxwell-Boltzmann distribution technically a normal distribution, with that factor of 4πv² in front, even though the underlying components may be normally distributed? Is the product of three normal distributions also a normal distribution?
Quote:Original post by apatriarca
I've never done something like that, but it looks quite hard to sample from that distribution. But if you need to set the velocity of the particles, why are you calculating the magnitude of the velocity when you can calculate the components with equation 9? Each component is independent and normally distributed.
I'm more interested in the speeds of the molecules rather than their component velocities, but equation 9 might come in handy just the same. Equation 10 isn't much different and most of the factors remain constant. Only v and T change.
My physics textbook (Halliday/Resnick) presents equation 10 with a slight difference:
N(v) = 4*pi*N * (m/(2*pi*k*T))^(3/2) * v^2 * exp(-m*v^2/(2*k*T))
where N(v) dv is the number of molecules in the gas sample having speeds between v and v+dv, T is absolute temperature (Kelvin), k is the Boltzmann constant, m is the mass of a molecule and N is the total number of molecules in the sample.
So plug in v and T and out comes the number of molecules per unit speed interval at speed v.
I'm interested in the speed distribution because it depends only on the temperature. Velocity has a vector component that changes due to collisions. I think the normal distribution that LockeCole presents will work for setting up the initial velocity unit vectors.
Consider just a single particle that you want to assign a speed, so that the speed distribution of all particles matches the distribution shown on the wiki page. Choose the curve that corresponds to your temperature and molecule type in the following picture, then throw a dart at the picture. If the dart lands below the curve, give the particle the speed corresponding to where the dart landed (the x position). If it lands above the curve, throw another dart until one eventually lands below the curve.
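That dart-throwing procedure is rejection sampling. A sketch in C, assuming a function that evaluates the curve, a cutoff speed v_max to sample within, and an upper bound f_max on the curve's height (the names, the rand01 helper, and the toy curve are all illustrative):

```c
#include <math.h>
#include <stdlib.h>

/* Illustrative helper: uniform value in [0, 1). */
static double rand01(void) {
    return rand() / (RAND_MAX + 1.0);
}

/* Throw darts at the box [0, v_max] x [0, f_max] until one lands under
   the curve; return the x position (the speed) of that dart. */
static double sample_speed(double (*curve)(double), double v_max, double f_max) {
    for (;;) {
        double v = rand01() * v_max;
        double y = rand01() * f_max;
        if (y < curve(v))
            return v;
    }
}

/* Toy curve for demonstration: f(v) = v on [0, 1]. */
static double toy_curve(double v) {
    return v;
}
```

The same sample_speed works for any bounded curve, including the Maxwell-Boltzmann one, as long as f_max is at least the curve's peak.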
With a given T, the probability that a given molecule has a speed between V and V+dv (where dv is very small) is
f(V).dv / integral(f(v)).
Integral(f(v)) is included for normalization so that the probabilities sum to 1. Once you find that integral for a given T over all possible v (look at the graph for an idea of the bounds, though in theory it runs from -infinity to infinity), it will remain the same for all v, since it is a constant. So if the form you found in the book differs from Wikipedia's and claims that, when multiplied by dv, it gives the probability of finding a molecule at speed V*, then it is probably the normalized form. That is my opinion; I would check it in my book if only I hadn't lent it to someone else.
I don't know how fast computers generate random numbers, but as Maze said, you can generate a random point within an x,y range until it falls under the curve (which you check with your equation, of course) and then assign the x value as the speed of the molecule. I don't know how much time that would take, but if you want an accuracy on the order of 10^-1 m/s, you will have to divide each unit on your x and y axes into 10 subdivisions.
* Non-strictly speaking, since in continuous probability distributions an observable can only take values in a range; it cannot take an exact value.
Well, I don't mean literally throw a dart. It's more of an analogy (-:
For programming purposes you would pick a random point in 2D and check whether the y coordinate is less than that ugly equation on Wikipedia evaluated at the x coordinate.
By the way, this is way overkill - LockeCole's method will work great, of course. But if you have more complicated probability distribution functions in the future (not just a simple Gaussian), you will know what to do!
[Edited by - Maze Master on July 22, 2008 4:40:07 PM]
Quote:Original post by iSina
With a given T, the probability that a given molecule has a speed between V and V+dv (where dv is very small) is
f(V).dv / integral(f(v)).
Integral(f(v)) is included for normalization so that the probabilities sum to 1. Once you find that integral for a given T over all possible v (look at the graph for an idea of the bounds, though in theory it runs from -infinity to infinity), it will remain the same for all v, since it is a constant. So if the form you found in the book differs from Wikipedia's and claims that, when multiplied by dv, it gives the probability of finding a molecule at speed V*, then it is probably the normalized form. That is my opinion; I would check it in my book if only I hadn't lent it to someone else.
I don't know how fast computers generate random numbers, but as Maze said, you can generate a random point within an x,y range until it falls under the curve (which you check with your equation, of course) and then assign the x value as the speed of the molecule. I don't know how much time that would take, but if you want an accuracy on the order of 10^-1 m/s, you will have to divide each unit on your x and y axes into 10 subdivisions.
* Non-strictly speaking, since in continuous probability distributions an observable can only take values in a range; it cannot take an exact value.
The form in my textbook is different from equation 10 in the Wikipedia entry by a factor of N, where N is the number of molecules in the system. That tells me that equation 10 returns the probability density, so the integral of equation 10, from 0 to infinity, returns the sum of the probabilities, which must equal 1. The lower bound is 0 because the independent variable is speed, which never goes below 0. To quote from my textbook: "The distribution curve is not symmetrical about the most probable speed because the lowest speed must be zero, whereas there is no classical limit to the upper speed a molecule can attain." The curve used in my textbook is similar to the graph that Maze Master posted - except it only pertains to oxygen, and the two curves represent different temperatures rather than different gases at 25°C.
For my purposes, this means that I don't need to throw darts (literally or figuratively [smile]), because applying the equation tells me how many particles in the system (assuming for now that it contains only one kind of molecule) should be assigned the speed used in the equation. The continuity of the curve and the discrete intervals used to plot it mean there will be some slop in my results (truncation on the up slope, expansion on the down slope), but I don't think it will be significant. Although the upper bound is infinite, eventually the equation will return counts of less than one particle (values less than one when multiplied by N, which will likely be at most 10,000 particles), which I think will be a good ending condition for the loop.
To expand some more, my textbook says: "The number of molecules having a speed between v1 and v2 equals the area under the curve between the vertical lines at v1 and v2. The area under the speed distribution curve, which is the integral
N = integral from 0 to infinity of N(v) dv, is equal to the total number of molecules in the sample. At any temperature the number of molecules in a given speed interval delta-v increases as the speed increases up to a maximum (the most probable speed vp) and then decreases asymptotically toward zero. The distribution curve is not symmetrical about the most probable speed because the lowest speed must be zero, whereas there is no classical limit to the upper speed a molecule can attain."
My textbook goes on to derive the formulas for most probable speed, average speed and the root-mean-square speed. These are very simple and can be used to check the outputs of attempts to calculate the entire distribution.
The average speed: va = sqrt(8/pi) * sqrt(kT/m)
The most probable speed: vp = sqrt(2) * sqrt(kT/m)
The root-mean-square speed: vrms = sqrt(3) * sqrt(kT/m)
My textbook goes on to explain how each of these compare with the others and sums it up with the following assertion:
vp < va < vrms
I can use these values to determine the size of the speed step as well. For example, I could use step = vp/(vrms-vp) or step = vp/(va-vp).
I think for my purposes, I'll precalculate the probabilities at each speed step for a particular temperature and store them in an array indexed by temperature that I can then use as a lookup table when setting the speed of the particles in the system. If that turns out to be overkill, I'm OK with it; like I said at the very beginning, I'm toying. [smile]
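One way to sketch that lookup table in C: build a cumulative (running-sum) table once per temperature, then map a uniform random number to a speed by binary search. The table size, the helper names, and the flat test curve are my choices, not from the thread:

```c
#include <math.h>

#define N_STEPS 512

/* Build a cumulative table of the speed distribution at one temperature:
   after this call, cdf[i] is the probability of a speed <= (i+1)*dv.
   pdf may be unnormalized; the table is normalized at the end. */
void build_speed_cdf(double *cdf, double (*pdf)(double), double dv) {
    double sum = 0.0;
    for (int i = 0; i < N_STEPS; i++) {
        sum += pdf((i + 1) * dv) * dv;
        cdf[i] = sum;
    }
    for (int i = 0; i < N_STEPS; i++)   /* normalize so cdf[N_STEPS-1] == 1 */
        cdf[i] /= sum;
}

/* Draw a speed by table lookup: u is uniform in [0, 1). */
double lookup_speed(const double *cdf, double dv, double u) {
    int lo = 0, hi = N_STEPS - 1;
    while (lo < hi) {                    /* binary search: first cdf >= u */
        int mid = (lo + hi) / 2;
        if (cdf[mid] < u) lo = mid + 1; else hi = mid;
    }
    return (lo + 1) * dv;
}

/* Toy distribution for testing: flat on the whole table range. */
static double flat_pdf(double v) { (void)v; return 1.0; }
```

One such table per temperature of interest gives constant-ish-time sampling when assigning speeds, which fits the "precalculate and index by temperature" plan.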
Thanks for all of the help.