# Devilogic

1. ## LU decomposition

Thank you for the explanation TheAdmiral :)
2. ## LU decomposition

Quote:Original post by TheAdmiral You're right, sorry - I didn't qualify this claim fully. I meant that using LU factorisation and back-substitution to solve a general system of linear equations is a bad idea. In the symmetric positive-definite case (as in a Cholesky factorisation), numerical instability is rarely a problem. I felt that this needed bringing up since LU factorisation is the first way everyone learns to solve a linear problem (as it's so simple), but not everybody is told that it is highly unstable even for many realistic problems. I agree with you that LU decomposition is (potentially very) unstable if you don't use any pivoting (you cannot avoid pivoting on an arbitrary matrix, though - a matrix has to be particularly nice for you to be able to skip at least partial pivoting :). Now, correct me if I'm wrong, but I was under the impression that LU decomposition is numerically stable with full pivoting (I can't remember whether partial pivoting is enough for stability here...).
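To make the pivoting point concrete, here is a small illustrative sketch of Doolittle-style LU factorization with partial pivoting (my own toy code, not production quality - real programs should use a library routine such as scipy.linalg.lu):

```python
def lu_partial_pivoting(A):
    """Factor A (list of lists) so that P*A = L*U, with partial pivoting.

    Illustrative sketch only; for real work use a library routine.
    """
    n = len(A)
    U = [row[:] for row in A]           # working copy, becomes U
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    perm = list(range(n))               # row permutation (represents P)
    for k in range(n - 1):
        # Partial pivoting: pick the largest |entry| in column k at or below row k.
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        if p != k:
            U[k], U[p] = U[p], U[k]
            perm[k], perm[p] = perm[p], perm[k]
            for j in range(k):          # swap the already-computed part of L
                L[k][j], L[p][j] = L[p][j], L[k][j]
        for i in range(k + 1, n):       # eliminate below the pivot
            L[i][k] = U[i][k] / U[k][k]
            for j in range(n):
                U[i][j] -= L[i][k] * U[k][j]
    return perm, L, U

# Without pivoting this matrix fails immediately (pivot 0 at position [0][0]).
perm, L, U = lu_partial_pivoting([[0.0, 2.0], [1.0, 1.0]])
print(perm)  # [1, 0]
print(U)     # [[1.0, 1.0], [0.0, 2.0]]
```

Note how the example matrix has a zero in the top-left corner: unpivoted elimination would divide by zero on it, while the row swap makes it trivial.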
3. ## Re-inventing the Wheel, or Mediocre Solutions to Already Solved Problems?

Quote:Original post by Vorpy This looks like a pretty good solution to me. The only other solution I've thought of for this sort of problem is using a linear array or list instead of a map, and storing the probability of events along with the events in the array. Then given the random number between 0 and totalWeight, iterate down the array until the corresponding event is found (adding up the probabilities of each event as they are iterated over, until the cumulative probability is greater than the random number). This is slower for selecting an event, since it is O(n) while the map solution is O(log(n)), but it can be O(1) to add, remove, or change the probability of an event, while the map does those operations in O(log(n)) time. A map also has higher overhead than an array.

I would personally recommend using a linear array, in the way that Vorpy already suggested. The only thing I would add is that a linear array can actually be O(log(n)) to find the event (the same as the map, and the same as the best case for a binary tree) if you use bisection. Unfortunately you get O(n) performance for operations that modify the probabilities, but if those are rare it doesn't really matter. As in the originally posted code for AddEvent, you store the following pairs in the array: (cumulative weight, event ID). When it's time to pull up an event you do the following (sorry for the Pascal code):

```pascal
function GetRandomEvent(tHit: Float): Integer;
var
  L, H: Integer; // the range of array indices where our event might be
  M: Integer;    // temporary index (mid-point between L and H)
begin
  L := 1;                        // set L to the start of the array
  H := Length(WeightEventMap);   // set H to the end of the array
  while (H - L) > 1 do           // iterate till we have found our (one) event
  begin
    M := (L + H) div 2;          // find the middle point between L and H
    if WeightEventMap[M].Weight = tHit then
    begin                        // are we extremely lucky (have we inadvertently
      H := M;                    // stumbled across the event we are looking for)?
      Break;                     // set H (the return value) to M and break out
    end;
    if WeightEventMap[M].Weight < tHit then
      L := M                     // raise the lower bound (our event's index is higher than M)
    else
      H := M;                    // lower the upper bound (our event's index is at most M)
  end;
  // Return the first event that has Weight >= tHit
  // (that event has index H after the above loop has executed).
  GetRandomEvent := WeightEventMap[H].EventID;
end;
```

As you can see, the above code is relatively simple (simpler than the tree-balancing code you would need to guarantee O(log(n)) performance of a binary tree, anyway) and has minimal overhead (no complex data structures and no classes), while performing better than (or at least as well as) either the map or the tree. However:

Quote:Original post by Vorpy As long as the number of events is small, it doesn't really matter that much.

This is also very true. If your number of events is small then it really doesn't matter if you just use a simple linear search (as in the original code), since the overhead of anything else can end up being much greater than the time to execute the linear search! Find out your requirements first. In spite of the above, if it were me I would still use bisection for completeness' sake, since unlike most solutions bisection has virtually no overhead.
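For anyone not fluent in Pascal, the same idea can be sketched in Python using the standard bisect module (the class and method names below are my own, purely illustrative):

```python
import bisect
import random

class WeightedPicker:
    """Store (cumulative weight, event ID) pairs; pick events by bisection."""

    def __init__(self):
        self.cum_weights = []   # strictly increasing cumulative weights
        self.event_ids = []     # event_ids[i] matches cum_weights[i]

    def add_event(self, event_id, weight):
        total = self.cum_weights[-1] if self.cum_weights else 0.0
        self.cum_weights.append(total + weight)
        self.event_ids.append(event_id)

    def pick(self, rng=random.random):
        # Draw a target in [0, total) and find the first cumulative
        # weight strictly greater than it - O(log n) via bisection.
        t_hit = rng() * self.cum_weights[-1]
        i = bisect.bisect_right(self.cum_weights, t_hit)
        return self.event_ids[i]

picker = WeightedPicker()
picker.add_event("rain", 1.0)
picker.add_event("sun", 3.0)
print(picker.pick())  # "rain" about 25% of the time, "sun" about 75%
```

The `rng` parameter just makes the random source swappable, which also makes the selection logic easy to test deterministically.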
4. ## Eigenvalues and Diagonalizable Matrices

Quote:Original post by Paradigm Shifter Further, because of the way the change of basis matrix is constructed which diagonalises A, all of A's eigenvalues are non-zero and distinct, since the columns of the change of basis matrix are the eigenvectors corresponding to the eigenvalues of A, and if any were zero or repeated then the change of basis matrix would be non-singular since the columns would be linearly dependent, and the change of basis matrix would be singular and hence non-invertible.

Actually, the eigenvalues can be zero, and need not be distinct, for a matrix to be diagonalizable (remember, the zero matrix is already diagonal in any basis!).

1. It's true that for a diagonalizable matrix no eigenvector can be 0, but its corresponding eigenvalue can be. Consider any matrix that represents a linear transformation with a non-trivial kernel - any vector from that kernel is an eigenvector for the eigenvalue 0.

2. Eigenvalues can be "repeated", in the sense that they appear more than once in the diagonalized matrix. What this means is that the eigenspace of that repeated eigenvalue (the space spanned by that eigenvalue's eigenvectors) has dimension greater than 1 - specifically, dimension equal to the number of times it appears in the diagonalized matrix. For example, consider an n×n identity matrix (in the n-dimensional space V): this matrix is already diagonal and it has only one distinct eigenvalue (1), but this eigenvalue is repeated n times - meaning that the space spanned by the vectors that this matrix "leaves alone" (multiplies by 1) is n-dimensional - in other words, the whole space V.

Hope this clears up some confusion..

P.S. The correct term for the number of times an eigenvalue is repeated in the matrix is the "algebraic multiplicity" of that eigenvalue (for a diagonalizable matrix it coincides with the "geometric multiplicity", the dimension of the eigenspace).
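To see points 1 and 2 numerically, here is a tiny sketch (my own, purely illustrative; it assumes real eigenvalues) that computes the eigenvalues of a 2×2 matrix straight from its characteristic polynomial:

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of the 2x2 matrix [[a, b], [c, d]] from the
    characteristic polynomial x^2 - (a+d)*x + (a*d - b*c).
    Illustrative sketch; assumes the eigenvalues are real.
    """
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

# A singular but diagonalizable matrix (symmetric, so diagonalizable):
# the eigenvalue 0 is perfectly allowed.
print(eig2(1, 1, 1, 1))   # (0.0, 2.0)

# The identity matrix: one distinct eigenvalue, repeated twice.
print(eig2(1, 0, 0, 1))   # (1.0, 1.0)
```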
5. ## Simple stick physics question :)

Quote:Original post by Numsgil If you wanted to solve this analytically, you'd set the point touching the ground as the fulcrum, and then your gravity on the center of mass becomes a torque causing force. It should be pretty easy from that to determine a path for the stick.

Except that it isn't that easy, since you can't move the fulcrum to the contact point :). The problem is that the contact point is moving (in an accelerated fashion, even) if the stick is on slippery ground (as the OP suggested) - your frame of reference becomes non-inertial and you would have to take fictitious (system) forces into account - which gets pretty messy pretty quickly (you end up with a very nasty-looking second-order differential equation). A better approach is to first notice a few things:

1. If there is no friction, the contact force from the ground is always completely vertical, and
2. The "bottom" of the stick can't penetrate the ground, but there is no reason for it to lift off the ground either - it always stays in contact with the ground, which means it only moves horizontally.

From these two observations it follows:

1. Since there is no external force acting in the horizontal direction, the center of mass of the stick moves only in the vertical direction (it doesn't move sideways!), and
2. Since the force of the ground on the contact point is always perpendicular to the direction of movement of the contact point, the force of the ground does no work (it doesn't change the total energy of the system = the stick!).

If we take these facts into account, then we can easily write:

1. If we write the height of the center of mass as "h", the length of the stick as "l", and the angle between the ground and the stick as "Phi", then: h = (l/2)*Sin(Phi), and
2. If we write the angle Phi at the start of the experiment (when the stick wasn't yet moving) as "Phi0", the height "h" at that same time as "h0", the mass of the stick as "m", its moment of inertia around the center of mass as "J" (J = (1/12)*m*l^2), the velocity of the center of mass as "v", the angular velocity of the stick around its CoM as "Omega", and time as "t", then conservation of energy (Wk = ΔWp) gives:

(1/2)*m*v^2 + (1/2)*J*Omega^2 = m*g*(h0 - h) ; with v = (l/2)*Cos(Phi)*Omega
Omega^2*((l/2)^2*Cos^2(Phi) + J/m) = g*l*(Sin(Phi0) - Sin(Phi)) ; with Omega = dPhi/dt
(dPhi/dt)^2*((l/2)^2*Cos^2(Phi) + J/m)/(Sin(Phi0) - Sin(Phi)) = g*l
Integrate[Sqrt(((l/2)^2*Cos^2(Phi) + J/m)/(Sin(Phi0) - Sin(Phi)))*dPhi, Phi0, Phi] = Sqrt(g*l)*t

Unfortunately it turns out that the above definite integral has no analytic solution (at least Mathematica can't solve it!) :(. But just as an exercise, if it were possible to solve it, you would then do the following to answer your question:

1. Express "Phi" as a function of "t" (you need to solve the above integral to do this),
2. Use h = (l/2)*Sin(Phi) to express "h" as a function of "t",
3. Differentiate "h" with respect to time "t" twice, to get the (downward) acceleration of the CoM, "a",
4. Note the following relation (writing the force of the ground as "Fn"): a = g - Fn/m, so Fn = m*(g - a),
5. To get the component of the force "Fn" acting in the direction of the stick ("Fshaft"), project the vertical force onto the stick's axis: Fshaft = Fn*Sin(Phi).

Since you have already expressed both "Fn" and "Phi" as functions of time "t", you have just found "Fshaft" at any time "t", and hence solved your problem (if you managed to solve the integral in step 1, of course ;) ).

In conclusion: your "simple" physics question turns out to be quite far from simple :).

P.S. There might be other ways to solve this problem, but I really can't think of any way to get around solving either a nasty-looking differential equation (fictitious forces and other nasty things), or a "simple" one (one that you can turn into a definite integral) that has no analytic solution. The problem is that the constraints are inherently position based (the bottom of the stick can't penetrate the ground), so at some point you must invoke the position ("Phi" or "h"), which, as I have demonstrated, you can't find analytically (though, as always, I could be wrong...).

P.P.S. The "nasty" equation I was talking about is the following (define: Omega = dPhi/dt, Alpha = dOmega/dt = d^2Phi/dt^2):

Alpha*(J + m*(l/2)^2*Cos^2(Phi)) - m*(l^2/8)*Sin(2*Phi)*Omega^2 + m*g*(l/2)*Cos(Phi) = 0

[Edited by - Devilogic on July 25, 2007 7:07:30 AM]
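Even though the integral has no closed form, the fall can still be traced numerically. Below is a rough sketch (my own illustrative code: simple Euler stepping of the energy-conservation relation Omega = dPhi/dt derived above, with J/m = l^2/12 for a uniform stick; parameters are arbitrary SI values):

```python
import math

def stick_fall(l=1.0, phi0=math.radians(80), g=9.81, dt=1e-5):
    """Integrate dPhi/dt = -sqrt( g*l*(sin(phi0) - sin(phi)) /
    ((l/2)^2*cos(phi)^2 + J/m) ) until the stick is nearly flat.
    Returns the elapsed time in seconds. Illustrative sketch only.
    """
    j_over_m = l * l / 12.0          # J/m for a uniform stick
    phi, t = phi0 - 1e-6, 0.0        # nudge off the (unstable) rest state
    while phi > 0.01:                # stop just before Phi = 0
        num = g * l * (math.sin(phi0) - math.sin(phi))
        den = (l / 2) ** 2 * math.cos(phi) ** 2 + j_over_m
        phi -= math.sqrt(num / den) * dt
        t += dt
    return t

# Time for a 1 m stick, released at 80 degrees, to slide flat:
print(round(stick_fall(), 2))
```

As a sanity check, the fall time should scale like sqrt(l) (dimensional analysis of the equations above), so a longer stick takes longer to come down.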
6. ## snow crystals

Here are some links to snowflake growth simulations using cellular automata (found using Google ;).

Wolfram's Snowflake Automaton (a bit cartoonish):
- http://demonstrations.wolfram.com/SnowflakeGrowth/
- http://www.vector.org.uk/archive/v193/coxe193.htm

Gravner-Griffeath Snowfakes (rather realistic):
- http://psoup.math.wisc.edu/Snowfakes.htm
- http://www.americanscientist.org/template/AssetDetail/assetid/54439/page/6;jsessionid=baa9...#54521
7. ## Who knows Pascal?

Double-quotes aren't used for anything in Pascal, actually. Strings as well as characters all use single quotes (unlike C, which uses different tokens for the two). And yes, two consecutive single-quote characters written inside a string represent one single-quote character in the string - so the correct way to write don't in a string would be: 'don''t' Hope that helped :)
8. ## Black Holes

Quote:Original post by jovani I remind you that what Chandra limits states is that the critical value for a star to end its life as black hole is 1.4 solar masses, it does not say anything about more massive stars. Actually, what Chandra's limit states is that 1.4 solar masses is the maximum mass at which a white dwarf can exist. A white dwarf more massive than Chandra's limit will collapse under its own gravity (if it is non-rotating) since electron degeneracy pressure will not be able to hold it stable anymore. But such a white dwarf will NOT necessarily collapse into a black hole (as was thought before the discovery of neutron stars). It will "simply" become a neutron star. Now, neutron stars can be as massive as 3 solar masses before collapsing under their gravity (this time into a black hole). So in short, Chandra's limit determines the maximum mass of a white dwarf, NOT the mass needed to produce a black hole. Quote:Original post by jovani I read the TV news report and here is a quote from it. Quote: The study's co-author, Mark Sullivan of the University of Toronto, dubbed it a "rogue supernova." The team that found the supernova has two theories about how the white dwarf got so big before it exploded. One is that the star was spinning at such a high speed that gravity couldn't pull it in and crush it at the normal Chandrasekhar limit. Another theory is that the observed supernova is the result of two white dwarfs merging and that the two stars together exceeded the limit only briefly before exploding. It seems to me that what is said is that they found a supernovae that originated from a larger than 1.4 solar masses white dwarf, and they are given two possible reasons for it. Basically they think some internal forces are working to prevent the implosion into a supernovae when the dwarf was 1.4 solar masses... Exactly! 
Since Chandra's limit sets the upper limit on the mass of a white dwarf (the star that explodes as a Type Ia supernova), and the team reportedly found evidence of a Type Ia supernova that would have to have come from a white dwarf heavier than that, you are left with only two possible conclusions:

1. Either the Type Ia supernova explosion didn't come from a "normal" white dwarf (maybe from a rotating one, or from a collision between two smaller white dwarfs...), OR
2. There's something deeply wrong with the way Chandra's limit was determined (a need for new laws of physics, maybe?).

Quote:Original post by jovani ... It does not say that they found a supernova that originated from a smaller than 1.4 solar masses white dwarfs, which is what would be requiered to prove your theory of the density as the reason for black holes formations.

Actually, the "theory of density" is more or less proven (if you believe in General relativity, you believe in "the theory of density"). Basically, according to General relativity, any object smaller than the Schwarzschild radius associated with its mass will inevitably collapse under its own gravity to form a black hole. And there is no absolute lower limit on the mass of the object for this to happen - the object, whatever its mass, just has to be compressed below its Schwarzschild radius and it will become a black hole (as big or as small as you'd like). True, the only known way for a star to be compressed by its own gravity into a black hole is if it's more massive than about 3 solar masses and has already undergone a transition into a neutron star. But this does NOT mean that you couldn't provide enough energy to compress a smaller object into a black hole by some means other than gravity! For example, in the early days of the universe there was an abundance of energy in a very confined space, and it is speculated that some pockets of matter were compressed by ambient radiation and by density fluctuations enough to form small black holes (much smaller than 1 solar mass, even!). There's also speculation that IF gravity were to behave a certain way (the way string theorists would want it to) we might even be able to produce microscopic black holes in our own particle accelerators (and there's NO way you're packing the energy of 3 solar masses into a subatomic particle inside an accelerator! :).

P.S. The only lower mass limit for a black hole to form is (probably) the Planck mass - and even that only because our laws of physics don't work at that scale, not because they say it would be impossible!

[Edited by - Devilogic on November 2, 2006 6:13:32 PM]
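To put numbers on the "theory of density": the Schwarzschild radius r_s = 2*G*M/c^2 exists for any mass whatsoever. A quick sketch (my own, with rounded physical constants):

```python
# Schwarzschild radius r_s = 2*G*M/c^2 for a few masses.
# Constants are rounded; this is a back-of-the-envelope illustration.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius below which an object of the given mass becomes a black hole."""
    return 2 * G * mass_kg / C**2

print(schwarzschild_radius(3 * M_SUN))   # ~8.9e3 m: the neutron-star route
print(schwarzschild_radius(5.97e24))     # Earth's mass: ~9 mm
print(schwarzschild_radius(1.0))         # 1 kg: ~1.5e-27 m
```

Compress Earth's mass into a marble-sized ball and it is a black hole; gravity alone just has no known way of doing that below ~3 solar masses.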
9. ## Recursive-descent parsing: Handling left-associativity?

Usually a better way to parse expressions of this type is to use only one recursive call per operator level/precedence. Normally you handle all operators (not just one) of the same level in a single parsing function, while accumulating the result of that level in the return variable. In practice this means that your function ParseExpression should look something like this:

```
Expression ParseExpression() {
    Term tLeft = ParseTerm();
    while (NextToken().Type == Plus) {
        tLeft = new Addition(tLeft, ParseTerm());
    }
    return tLeft;
}
```

Notice that three things have changed:

1. The if-statement was replaced with a while-statement,
2. The call to Addition now uses ParseTerm as the second parameter, instead of ParseExpression (the function ParseExpression is now only called once per operator level, and the next level [ParseTerm] is invoked directly rather than through a recursive ParseExpression call),
3. The result is now accumulated in tLeft, which is returned from the function when there are no more operators of this level in the input string.

If you do all your arithmetic like this you get left-associativity for free, while also reducing recursion depth and making the code more understandable. Btw, the grammar this is parsing is: Expression = Term {"+" Term}.

P.S. I am not a C programmer, so the code snippet I posted above might contain errors!

[Edited by - Devilogic on September 28, 2006 6:36:57 AM]
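In case a runnable version helps, here is the same loop-based, left-associative scheme sketched in Python (the tokenizer and all names are made up for illustration; it evaluates numbers directly instead of building Addition nodes):

```python
# Parses Expression = Term {("+"|"-") Term} and evaluates as it goes,
# so "8 - 3 - 2" groups left-associatively as (8 - 3) - 2.

def tokenize(text):
    return text.replace("+", " + ").replace("-", " - ").split()

def parse_expression(tokens):
    left = parse_term(tokens)       # called once per level, not recursively
    while tokens and tokens[0] in ("+", "-"):
        op = tokens.pop(0)
        right = parse_term(tokens)  # next level invoked directly
        left = left + right if op == "+" else left - right
    return left

def parse_term(tokens):
    # A real parser would handle "*" and "/" at this level the same way.
    return int(tokens.pop(0))

print(parse_expression(tokenize("8 - 3 - 2")))  # 3, i.e. (8 - 3) - 2
```

With the naive right-recursive version, "8 - 3 - 2" would come out as 8 - (3 - 2) = 7; the loop gives the correct left-associative 3.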
10. ## DB Design: AVL and B-Trees

Well, usually databases are quite big and so can't be loaded into computer memory all at once, but rather have to be stored on some external storage device (an HD drive or similar) and loaded in parts. Since access to external storage is usually very expensive (= slow), it is very important for a DB's performance that there are as few accesses to that storage as possible. Now, while searching a B-Tree may be a bit slower than searching an AVL tree if both are stored in RAM, the opposite is true when you have to deal with slow external storage. Let's see why:

Since an AVL tree is just a (balanced) binary tree, you have to make on average "log2 n" comparisons to find an element in the tree. Since the structure is such that you can't predict in advance to which node a single comparison will send you, you have to access the external storage device "log2 n" times as well. Now, in a B-Tree a single node may have up to k children instead of just 2, and therefore the depth of a balanced B-Tree is "logk n". Since you have to find the correct child node to go to when searching for a key, you usually have to make up to "log2 k" comparisons at each node you visit (if you use binary search). The overall comparison complexity of searching a B-Tree is therefore "logk n * log2 k" = "log2 n" (i.e. the same as that of an AVL tree). But this time it is not necessary to make the same number of accesses to external storage. Since the "link table" of a single node is usually written sequentially on the storage device, you only have to access external storage at unpredictable (different) places once per unique node you visit - and that is on average "logk n" times (= the depth of the tree). Since the binary search (or similar) through a single node's "link table" in a B-Tree can be performed in memory, which is much faster than accessing an HD drive, the overall time of the search operation becomes bounded only by external storage access. This in turn means that using a B-Tree instead of an AVL tree, when considering slow external storage, effectively reduces the time complexity from "O(log2 n)" to "O(logk n)", where k can be chosen arbitrarily!

Just to illustrate: imagine you have to find a certain string among n = 10,000 different keys that are stored in a DB. If you were to use a perfectly balanced binary tree to search for it, you would have to access the database about 13 times on average (an AVL tree can also produce approximately 1.44 times that, so around 19 times in the worst case). But if you were to use a B-Tree with, say, k = 26 (the number of letters in the alphabet, for example) you would only have to access the database 3 times! This means that access to this database using a B-Tree would be between 4 and 6 times faster than if you were to use an AVL tree. And the difference only grows with the number of keys (= n) and child nodes (= k).

[Edited by - Devilogic on September 20, 2006 9:11:58 AM]
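The access counts in the illustration are easy to check directly (a throwaway sketch):

```python
import math

# External accesses ~ tree depth: log2(n) for a binary/AVL tree,
# log_k(n) for a B-Tree with k children per node.
n, k = 10_000, 26

binary_accesses = math.log2(n)        # one unpredictable seek per level
btree_accesses = math.log(n, k)       # in-node search happens in RAM

print(round(binary_accesses))         # 13
print(math.ceil(btree_accesses))      # 3
```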
11. ## How to achieve an arc like movement?

Quote:Original post by ehudros Sorry to nag, but I really need the help... Is the law refered to by the admiral kepler's law? if so, I did not find any refernce to the formula he suggested and I can't seem to get it to work. Is K just some arbiterary magnitude? Also, I guess i should use kepler's law to crash the "planet" into the sun, but I cant figure out how that's done using the equations... please, someone? :)

As far as I can tell, the "law" is not physical (i.e. it doesn't appear in physical systems unless the system is complex - no fundamental force has such a profile). It's just a fictional force that's a bit "less strong" than the normal force of gravity/electrostatics (it falls off more quickly). As for using it as a force governing movement, I can't really predict what effect it would have, other than that: 1. an object under its influence would accelerate very quickly as it approached the source, 2. it would be harder to achieve a stable orbit (some distance away from the source, the force strength changes more quickly than in the case of a gravity-like force). Personally, I would go with JBourrie's idea of using a Bezier curve (a quadratic one would probably do in your case). You'd just have to make sure you move along the curve based on the distance you've travelled, because when a Bezier curve is very curved the parameter t (0 <= t <= 1) doesn't advance uniformly with distance (it goes "faster" around tight bends than it does in straight parts).
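One common way to get constant-speed movement along a Bezier curve is to precompute a table of cumulative chord lengths and invert it: given a distance travelled, look up the corresponding t. A rough sketch of that idea (all names are mine, purely illustrative):

```python
import bisect

def quad_bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t."""
    u = 1.0 - t
    return (u*u*p0[0] + 2*u*t*p1[0] + t*t*p2[0],
            u*u*p0[1] + 2*u*t*p1[1] + t*t*p2[1])

def arc_length_table(p0, p1, p2, steps=256):
    """Cumulative chord lengths at uniform parameter steps."""
    lengths, total = [0.0], 0.0
    prev = quad_bezier(p0, p1, p2, 0.0)
    for i in range(1, steps + 1):
        pt = quad_bezier(p0, p1, p2, i / steps)
        total += ((pt[0]-prev[0])**2 + (pt[1]-prev[1])**2) ** 0.5
        lengths.append(total)
        prev = pt
    return lengths

def t_for_distance(lengths, dist):
    """Invert the table: parameter t at which 'dist' has been travelled."""
    i = bisect.bisect_left(lengths, dist)
    return min(i, len(lengths) - 1) / (len(lengths) - 1)

# Degenerate straight-line "curve" from (0,0) to (2,0): halfway along
# its length of 2.0 should give t close to 0.5.
table = arc_length_table((0, 0), (1, 0), (2, 0))
print(round(t_for_distance(table, 1.0), 2))  # 0.5
```

A production version would interpolate between table entries instead of snapping to the nearest one, but the table-plus-bisection idea is the core of it.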
12. ## Electrons

Quote:Original post by gparali You must have posted as i was writing. Anyway. I am not a physisist but from what i can tell the simulation is about electron cloud around a mote of dust. It is more complex than this but after this a cant quite follow what he tells me. I would appreciate it if you could post a link to this galaxy simulation you mention i

Actually, I was thinking of something along the lines of what nmi posted above (who, coincidentally, also posted just as I was writing my reply...). While I don't have any direct experience with these things, I did find a nice overview article of the different methods used to speed up simulations of such N-body systems - you can find it here (look for the section "Calculating Force"). Alongside the descriptions it contains a lot of references to various resources, so if you think one of those methods would be feasible in your case, you can use those resources to work out the precise details/avoid common pitfalls... That's about as much as I can help you, though, I guess. :) Cheers!
13. ## Electrons

Is your brother trying to simulate a free electron cloud, or something more complicated? If it's an unbounded electron cloud then the simulation is probably very similar to gravity/galaxy simulations (but on a smaller distance scale and with a reversed force vector). If it's something more complicated, would it be possible for you to describe the problem briefly?
14. ## Can divide-and-conquer be used for anything else besides Quick-sort and the Merge-sort?

I know it's not really "real-world" use, but the divide-and-conquer paradigm was needed to solve the problem Points in this year's IOI (International Olympiad in Informatics). Coincidentally (or not? :) it was the hardest problem of the competition, and only 1 contestant was able to fully solve it. The problem statement is here, and the solution here. Cheers!
15. ## color gradient from bezier curve

In your case, I'd just evaluate the curve at some points and then construct an approximation of the curve by "drawing" straight lines between those points. Once you've got that, just order the segments by their x values (or create them pre-ordered) and you've got what you were looking for. :) If you're worried about the accuracy of this approach you can employ some kind of adaptive subdivision scheme. A shameless plug: Adaptive subdivision of Bezier curves. This might possibly be even more accurate than what you would get by solving the problem analytically (floating-point errors near the edges probably wouldn't be very nice). P.S. The approach I describe at the link above could probably be further optimized to better suit your constraints, though. For example, you don't have to continue subdividing once a segment's endpoints are closer than 1 "pixel" along the x-axis, assuming the curve is generally well behaved...
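A minimal sketch of the evaluate-then-sort idea from the first paragraph (uniform sampling rather than adaptive subdivision; all names are mine, purely illustrative):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t."""
    u = 1.0 - t
    return tuple(u**3*a + 3*u*u*t*b + 3*u*t*t*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def flatten(p0, p1, p2, p3, n=16):
    """Approximate the curve with n line segments, ordered by x so a
    gradient can be read off left to right."""
    pts = [cubic_bezier(p0, p1, p2, p3, i / n) for i in range(n + 1)]
    segs = list(zip(pts, pts[1:]))
    segs.sort(key=lambda s: min(s[0][0], s[1][0]))   # order by x
    return segs

segs = flatten((0, 0), (1, 2), (2, -2), (3, 0))
print(len(segs))  # 16
```

An adaptive version would replace the fixed `n` with a flatness test, subdividing only where the curve actually bends.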