# ury

1. ## Rotation equality

Dave's method can be improved a little if we recall that for any rotation matrix the following holds: (*) trace(R) = 1 + 2cos(a), where a is the angle of rotation. So, given our two matrices R1 and R2, define R = transpose(R1)*R2. The two matrices are close enough iff |3 - trace(R)| < 2*eps. Note that you only need the diagonal entries of R to find the trace. By the way, (*) can be proven using Rodrigues' rotation formula.
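A quick sketch of the test above in Python (the function name and the threshold convention are mine; note that only the diagonal of R = transpose(R1)*R2 is ever computed):

```python
import math

def rotations_close(R1, R2, eps):
    """Test whether two 3x3 rotation matrices (lists of rows) represent
    nearly the same rotation, using trace(R1^T * R2) = 1 + 2*cos(a)."""
    # Only the diagonal of R = R1^T * R2 is needed for the trace:
    # R[i][i] = sum_k R1[k][i] * R2[k][i]
    trace = sum(R1[k][i] * R2[k][i] for i in range(3) for k in range(3))
    # |3 - trace| = |2 - 2*cos(a)|, so the test below is 1 - cos(a) < eps.
    return abs(3.0 - trace) < 2.0 * eps
```

Keep in mind that eps here bounds 1 - cos(a), not the angle a itself.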
2. ## Color Contrast

A search on Google gave me this. As for the formula, I simply made it up. The idea is to use something like the Lagrange interpolating polynomial.
3. ## Color Contrast

Quote: Original post by wolf
float3 Cont = (color - 0.5f) * Contrast + 0.5f;
Because the value range is restricted to 0..1 at this stage of the pipeline, what this code snippet does is make dark colors brighter and bright colors darker. In other words, it decreases the contrast. Moving it to any other place in the pipeline where the value range is bigger wouldn't help either, because it cannot really increase the values of colors. I find this slightly unsatisfying.

Sounds like the value of Contrast is less than 1. Try using values bigger than 1. Please note that doing so, you are losing detail, because color values can be mapped outside [0,1].

A better alternative is to borrow from the well-known "S"-curve technique commonly used in Photoshop's "Curves". Let's define: f(t) = t - a*t*(t-1)*(t-1/2), where a is used to adjust the contrast. The higher a is, the stronger the adjustment; a value of 0 means no adjustment at all. Suggested values for a are usually in the [0,2] range. For small enough values of a, f maps [0,1] onto [0,1], so the loss of detail is minimal. Although f(t) is only a cubic polynomial, its computation can be expensive. If possible, consider using a 1D texture to post-process the color output of your pipeline.

[Edited by - ury on November 27, 2006 5:13:28 PM]
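A minimal sketch of the S-curve and the 1D-texture idea in Python (the function names are mine; `build_lut` bakes the curve into a table the way you would bake it into a 1D texture):

```python
def s_curve(t, a):
    """Cubic S-curve contrast adjustment on a value t in [0, 1].
    a = 0 leaves the input unchanged; larger a means stronger contrast."""
    return t - a * t * (t - 1.0) * (t - 0.5)

def build_lut(a, size=256):
    """Bake the curve into a clamped lookup table, as for a 1D texture."""
    return [min(1.0, max(0.0, s_curve(i / (size - 1), a)))
            for i in range(size)]
```

Note that f(0) = 0, f(1) = 1 and f(1/2) = 1/2 for any a, so blacks, whites and mid-gray are preserved while the curve pushes darks down and brights up.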
4. ## DCT: Dct-values fit into my data type

Assuming the "standard" 8x8 DCT-II transform, it should map values in [0,255] onto [-2047,2048]. This means that before quantization, the output values require 12 bits of precision. Dividing the output values by 2^4 = 16 should squeeze them into the [-127,128] range.

There's more to the compression process than just performing the transformation. Usually, you should perform the following steps to compress the image:
1. Perform the transformation.
2. Divide the output by some quantization matrix. A different matrix can be used for each channel.
3. Compress the result using RLE compression.
4. Further compress the result using Huffman encoding.

Note: since the data is stored in a matrix, in step 3 we scan it in a zigzag order starting from index (0,0). The idea behind this compression is that steps 1 and 2 will result in a matrix with many repeating zeros and similar small values.
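The zigzag scan from step 3 can be sketched like this in Python (the function names are mine; the ordering matches the usual JPEG-style scan over anti-diagonals):

```python
def zigzag_order(n=8):
    """Return the (row, col) visiting order for an n x n block, walking
    the anti-diagonals starting from (0, 0) and alternating direction."""
    order = []
    for s in range(2 * n - 1):  # s = row + col indexes one anti-diagonal
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else reversed(diag))
    return order

def scan(block):
    """Flatten a square block into a list in zigzag order."""
    return [block[i][j] for i, j in zigzag_order(len(block))]
```

After quantization the tail of this list is mostly zeros, which is exactly what makes the RLE step in the list above pay off.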
5. ## About HSL to RGB with HDR values

Have you considered working with the HSB/HSV color space? It should be much more intuitive both to you and to your artists (since this is the color space used in Photoshop). The conversion formulas provided by fboivin should work for any positive RGB values. Please note that H and S will still be mapped into the [0..1] range, while the B (brightness) value will be mapped into [0..inf), which is pretty much what you are looking for.
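A sketch of such a conversion in Python (the function name is mine): V just takes the maximum channel, so HDR inputs with channels above 1 map V into [0, inf) while H and S stay in [0, 1].

```python
def rgb_to_hsv(r, g, b):
    """Convert positive RGB to HSV. H and S land in [0, 1]; V is the
    maximum channel, so it is unbounded for HDR inputs."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    d = mx - mn
    s = 0.0 if mx == 0 else d / mx
    if d == 0:                       # gray: hue is undefined, pick 0
        h = 0.0
    elif mx == r:
        h = ((g - b) / d) % 6
    elif mx == g:
        h = (b - r) / d + 2
    else:
        h = (r - g) / d + 4
    return h / 6.0, s, v
```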
6. ## curvature variation for cubics

First of all, K(t)^2 = (x'(t)y''(t) - x''(t)y'(t))^2 / (x'(t)^2 + y'(t)^2)^3. And yes, you should divide by n. Just read about the "composite trapezoidal rule" in here.
7. ## Decomposing rotations

Quote: Original post by DonDickieD
But what would you suggest for finding the integrals over omega?

You could use a matrix logarithm to extract it from the orientation matrix. In order to do that, you have to diagonalize your orientation matrix, which isn't a "pretty" process :) But it could work...
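For 3x3 rotation matrices specifically, the logarithm can also be read off in closed form from the axis-angle representation, which sidesteps an explicit diagonalization. A sketch (the function name is mine; it degrades near a = pi, where the skew part vanishes):

```python
import math

def rotation_log(R):
    """Extract the rotation vector omega (axis * angle) from a 3x3
    rotation matrix: the matrix logarithm in axis-angle form."""
    # trace(R) = 1 + 2*cos(a); clamp for floating-point safety
    a = math.acos(max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0)))
    if a < 1e-12:
        return (0.0, 0.0, 0.0)
    s = a / (2.0 * math.sin(a))  # ill-conditioned as a approaches pi
    return (s * (R[2][1] - R[1][2]),
            s * (R[0][2] - R[2][0]),
            s * (R[1][0] - R[0][1]))
```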
8. ## Decomposing rotations

Quote: Original post by DonDickieD
I don't think this is correct either. What you write would imply the following: C1 = u1 * v2 = cos(alpha) = asin( u1 * v2 )

No, no, no. :) That's not the idea at all. Just like you said, the original constraint behaves poorly in extreme situations. This is probably due to the fact that the constraint normals become much smaller in such situations. asin fixes this problem, since the normals are always unit. And since asin(x) ~ x near 0, the constraint should act the same as the old version in "good" configurations.

Quote:
Anyway, I don't want to reformulate the constraint since it works. It is simply a little softer in extreme conditions, e.g. long chains. So my question still is how do I find the stabilization term (Baumgarte) for a velocity constraint of the following form: n = R1 * n_local1; dC/dt = (omega1 - omega2) * n Can I compute e.g. the relative rotation quaternion dq and project the quaternion axis onto n. Something like this using |dq.axis| = sin(alpha/2) ~ alpha/2 for small angles: C = 0.1 * ( 2 * dq.axis * n ) / dt Do you see any problems here?

Because dq.axis isn't really an integral of the omega.
9. ## Decomposing rotations

You are right of course, it was a typo. What I meant was asin, not acos.
C1 = asin(u1.v2) -> dC1/dt = (1 / Sqrt(1 - (u1.v2)^2)) * (u1 x v2) * (omega1 - omega2)
C2 = asin(u1.w2) -> dC2/dt = (1 / Sqrt(1 - (u1.w2)^2)) * (u1 x w2) * (omega1 - omega2)
10. ## Decomposing rotations

What do you mean by softer? Basically, the two methods are almost identical; the only difference is that in the second method the "constraint normals" are unit. The same can easily be done for the first method as well. All we need is to normalize u1 x v2 and u1 x w2. Formally, it can be done as follows:
C1 = acos(u1.v2) -> dC1/dt = (-1 / Sqrt(1 - (u1.v2)^2)) * (u1 x v2) * (omega1 - omega2)
C2 = acos(u1.w2) -> dC2/dt = (-1 / Sqrt(1 - (u1.w2)^2)) * (u1 x w2) * (omega1 - omega2)
Clearly, the normals are unit now.
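The unit-length claim is easy to check numerically: |u1 x v2| = sin(angle) and sqrt(1 - (u1.v2)^2) = sin(angle) for unit vectors, so the scaled cross product has length 1. A small sketch (the helper names are mine):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def constraint_normal(u1, v2):
    """Normal of C = acos(u1 . v2): (u1 x v2) / sqrt(1 - (u1.v2)^2).
    For unit, non-parallel u1 and v2 this is always a unit vector."""
    s = 1.0 / math.sqrt(1.0 - dot(u1, v2) ** 2)
    return tuple(s * c for c in cross(u1, v2))
```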
11. ## Cholesky algorithm

Cholesky decomposition only works with positive-definite matrices. To put it another way: unless your matrix has strictly positive eigenvalues, then, just like Wasting Time said, you'll end up taking the square root of zero or even negative values.
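A minimal Cholesky sketch in Python (the function name is mine) that makes the failure mode explicit: when a pivot is not strictly positive, the matrix is not positive-definite and the factorization bails out instead of taking sqrt of a non-positive number.

```python
import math

def cholesky(A):
    """Cholesky factorization A = L * L^T for a symmetric matrix A
    (list of rows). Raises ValueError when A is not positive-definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= 0.0:  # non-positive pivot: not positive-definite
                    raise ValueError("matrix is not positive-definite")
                L[i][i] = math.sqrt(s)
            else:
                L[i][j] = s / L[j][j]
    return L
```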
12. ## Decomposing rotations

Dirk, what are you trying to do? Does it have anything to do with rotational constraints?
13. ## fast matrix inverse

In your original post, A is the symmetric matrix. To avoid any further confusion, can you tell me more about your problem and the kind of matrices that you have? Woodbury's formula is useful in the following case: say you have a matrix A and you already know its inverse. Now, if you want to update A by adding another matrix B, such that B has a small rank, the inverse of A+B can be computed with Woodbury's formula. After reading your original post once again, I noticed that both your A and B are invertible. This means that they have full rank. If this is really the case, Woodbury's formula is useless.
14. ## fast matrix inverse

You might find Woodbury's formula useful. Please note that it'll only help you if A = U*U^T, where U is an NxM matrix with M << N.
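To illustrate the low-rank-update idea, here is the rank-1 special case of Woodbury's formula (Sherman-Morrison) in Python; the function name is mine, and matrices are plain lists of rows:

```python
def sherman_morrison(Ainv, u, v):
    """Given Ainv = A^{-1}, return the inverse of (A + u v^T).
    This is the rank-1 special case of Woodbury's formula:
    (A + u v^T)^{-1} = Ainv - (Ainv u)(v^T Ainv) / (1 + v^T Ainv u)."""
    n = len(Ainv)
    Au = [sum(Ainv[i][k] * u[k] for k in range(n)) for i in range(n)]
    vA = [sum(v[k] * Ainv[k][j] for k in range(n)) for j in range(n)]
    denom = 1.0 + sum(v[k] * Au[k] for k in range(n))
    return [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
            for i in range(n)]
```

The point is cost: each update is O(n^2) instead of the O(n^3) of a fresh inversion, which is exactly why it only pays off when the perturbation has small rank.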
15. ## Natural logarithm

Even better estimates can be achieved using a Padé approximation. Here's an example. Define:
P(x) = -824 - 9024x - 17880x^2 + 17880x^4 + 9024x^5 + 824x^6
Q(x) = 189 + 4194x + 18963x^2 + 30108x^3 + 18963x^4 + 4194x^5 + 189x^6
Then for any x > 0: Log[x] ~= P(x) / Q(x). This approximation gives its best results near x = 1. If you want to sacrifice this in order to get a better approximation for bigger x, you can perturb this function using a Chebyshev polynomial and get something like:
P(x) = -3361863335 - 41216492700x - 84220506435x^2 + 84220506435x^4 + 41216492700x^5 + 3361863335x^6
Q(x) = 738676107 + 18554747202x + 87388990245x^2 + 140109217500x^3 + 87388990245x^4 + 18554747202x^5 + 738676107x^6
where again for each x > 0: Log[x] ~= P(x) / Q(x).
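The first pair of polynomials can be checked in a few lines of Python (the function names are mine; both polynomials are evaluated with Horner's rule). Note that P(1) = 0 exactly, so the approximation is exact at x = 1:

```python
import math

# Coefficients of the first P/Q pair above, from constant term upward.
P = (-824.0, -9024.0, -17880.0, 0.0, 17880.0, 9024.0, 824.0)
Q = (189.0, 4194.0, 18963.0, 30108.0, 18963.0, 4194.0, 189.0)

def horner(coeffs, x):
    """Evaluate a polynomial given by (c0, c1, ..., cn) at x."""
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def pade_log(x):
    """Rational approximation of log(x) for x > 0, best near x = 1."""
    return horner(P, x) / horner(Q, x)
```

A nice bonus of these symmetric coefficients is that pade_log(1/x) = -pade_log(x) exactly, mirroring the identity log(1/x) = -log(x).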