# GameCat

1. ## Clipping plane misconception?

Because of the way depth is computed, most of the precision in the depth buffer is spent close to the near plane. This makes sense, since it means you get more precision for things that are close than for things that are far away. It also means that what matters is the ratio between the near and far distances, and that pushing the near plane out generally buys you much more precision than bringing the far plane in.
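A quick numeric sketch of why this happens, using a D3D-style depth mapping (the exact formula varies by API, so treat the constants as illustrative):

```python
def depth(z, n, f):
    # D3D-style normalized depth: 0 at the near plane, 1 at the far plane
    return f * (z - n) / (z * (f - n))

def z_at_half_depth(n, f):
    # eye-space distance at which the depth buffer value reaches 0.5
    return 2.0 * f * n / (f + n)

# with near = 0.1 and far = 1000, HALF the depth range is spent before z = 0.2;
# pushing the near plane out to 1.0 moves that point out to z = 2.0
print(z_at_half_depth(0.1, 1000.0))  # ~0.2
print(z_at_half_depth(1.0, 1000.0))  # ~2.0
```

Note how multiplying the near distance by 10 moves the 50% point by the same factor, while the far plane barely matters.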

I don't know where to find a source, but the easy, hackish way is to transform the data, do linear regression, and transform back. Assuming you know how to do linear regression, of course.
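For example, to fit y = a*e^(b*x), regress log(y) on x and transform the intercept back. A minimal sketch, using noise-free data so the recovered parameters are exact:

```python
import math

def linear_fit(xs, ys):
    # ordinary least squares for y = m*x + c
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

def exp_fit(xs, ys):
    # fit y = a * exp(b*x) by regressing log(y) on x, then transforming back
    b, log_a = linear_fit(xs, [math.log(y) for y in ys])
    return math.exp(log_a), b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]  # a = 2, b = 0.5
a, b = exp_fit(xs, ys)
print(a, b)  # ~2.0, ~0.5
```

Keep in mind the caveat with this trick: least squares on the log-transformed data weights the errors differently than least squares on the original data.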
3. ## Question: checking if a set of vectors spans a vector space (REPHRASED)

Short answer: no. Long answer: a linearly dependent set of n vectors cannot span R^n. If you have a linearly dependent set of vectors u1, u2, ... un that spans a space Un, then you could just remove the vectors that can be expressed as linear combinations of the others until you have a linearly independent subset u1, u2, ... uk (k < n) which still spans Un. Problem is, you need at least n linearly independent vectors to span R^n, so you're screwed.

Edit: I didn't see your post right above mine; I was answering your original question, sorry if you got confused. [Edited by - GameCat on July 18, 2005 6:08:03 PM]
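If you want to check this numerically: n vectors span R^n exactly when the matrix with the vectors as rows has rank n. A self-contained sketch (plain Gaussian elimination; the tolerance is an arbitrary choice):

```python
def rank(rows):
    # Gaussian elimination with partial pivoting; rank = number of pivots found
    m = [list(r) for r in rows]
    nrows, ncols = len(m), len(m[0])
    r = 0
    for c in range(ncols):
        if r == nrows:
            break
        # pick the row with the largest entry in this column for stability
        pivot = max(range(r, nrows), key=lambda i: abs(m[i][c]))
        if abs(m[pivot][c]) < 1e-12:
            continue  # column is (numerically) all zeros below row r
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, nrows):
            f = m[i][c] / m[r][c]
            for j in range(c, ncols):
                m[i][j] -= f * m[r][j]
        r += 1
    return r

print(rank([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]))  # 3: spans R^3
print(rank([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]]))  # 2: dependent
```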
4. ## What part of the pipeline does antialiasing slow down?

Multisample antialiasing does not increase the number of fragments (if you use the term "fragment" the way the OpenGL spec does) but the number of *samples*. The two are not identical. The fragment shader is run once per fragment, generating a single colour value, and multiple samples containing depth and stencil are output (unless the shader outputs depth, in which case all samples get the same depth value). So multisample AA requires, in the worst case, storage for as many colour and depth/stencil values as there are samples per pixel, but a single fragment still generates only *one* colour value. This sample buffer is then resolved into a single colour value per pixel before display.

Multisample AA mainly stresses memory bandwidth and fillrate if you use lots of samples. Most modern graphics chipsets can generate two samples per clock for free, so e.g. 4 samples cost you an extra clock. Memory bandwidth also increases with the number of samples, but not as much as you might think, thanks to compression: over the interior of a triangle all the samples in a pixel have identical depth/stencil and colour values, and distinct samples are only really needed at the edges, so you can save bandwidth. Of course, the smaller the triangles are, the less true this becomes.
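For illustration, the resolve step is essentially a per-pixel average of the stored sample colours (a box filter; real hardware may use fancier filters):

```python
def resolve(sample_colors):
    # box-filter resolve: average a pixel's sample colours into one display colour
    n = len(sample_colors)
    return tuple(sum(c[i] for c in sample_colors) / n for i in range(3))

# 4x MSAA pixel on a triangle edge: 3 samples covered by red, 1 by black background
samples = [(1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
print(resolve(samples))  # (0.75, 0.0, 0.0)
```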
5. ## Difference between KD-tree and a AABB BV tree

Provided your world is finite in size, the AABB tree is binary, and the bounding volumes never overlap, then you have a kd-tree, except with a larger memory footprint. But an AABB tree is not necessarily binary and doesn't need to partition space (the union of two children can be smaller than the parent in general, although that might not be the case in your tree), so not all AABB trees are equivalent to kd-trees. By the way, you'll probably get better results by using the surface area heuristic when building the tree: essentially, you choose a split point such that num_of_tris_in_child * child_surface_area, summed over the children, is minimised. Google for more info.
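A sketch of the surface area heuristic cost just described; the candidate-split representation here is an illustrative assumption, not a standard API:

```python
def sah_cost(num_left, area_left, num_right, area_right, area_parent):
    # surface area heuristic: the probability of a ray hitting a child is
    # roughly child_surface_area / parent_surface_area, so the expected cost
    # of a split is the area-weighted sum of the children's triangle counts
    return (area_left / area_parent) * num_left + \
           (area_right / area_parent) * num_right

def best_split(candidates, area_parent):
    # candidates: list of (num_left, area_left, num_right, area_right) tuples,
    # one per candidate split plane; returns the index of the cheapest split
    return min(range(len(candidates)),
               key=lambda i: sah_cost(*candidates[i], area_parent))

# hypothetical splits of a node with 8 triangles and surface area 20
candidates = [(4, 10.0, 4, 10.0), (1, 2.0, 7, 18.0)]
print(best_split(candidates, 20.0))  # 0: the balanced split is cheaper here
```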
6. ## ghostview vs. acrobat reader

Adobe Reader is really crappy at displaying bitmap fonts. PDFs produced from LaTeX source often include bitmapped fonts, since LaTeX's own fonts can be rendered directly into that format internally. You can include vector versions of the fonts (or use a font most people are likely to have installed, e.g. Times) by using certain switches when generating PDFs in newer versions of LaTeX, but older papers on the web often use bitmapped fonts, which gives the crappy quality.
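For what it's worth, a preamble along these lines (package names assume a reasonably recent LaTeX installation) gets you vector fonts instead of bitmapped ones:

```latex
% use vector (Type 1) fonts instead of bitmapped Computer Modern
\usepackage[T1]{fontenc}
\usepackage{lmodern}   % Latin Modern, a Type 1 replacement for Computer Modern
% or switch to a widely installed face such as Times:
% \usepackage{mathptmx}
```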
7. ## Yo Mama.

Your mother, rotund. Like a beacon, she draws them. So many "uncles".
8. ## Z-buffer - why not pass the z value separately

The reason w-buffering isn't supported on modern cards is that when z is linear in screen space, you can make lots of optimisations. It enables efficient compression of z, which is very important for efficient multisample antialiasing, for example. It also saves per-sample transistors, since you avoid a divide that would otherwise be required. And you want to be able to output lots of z/stencil samples to get good framerates with AA and/or stencil shadows and/or high-res shadow maps. There's a more extensive discussion of this at opengl.org under "Suggestions for OpenGL 2.?"; if you search for "z-buffer formula" you'll probably find it.
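A small numeric check of why that screen-space linearity matters: eye-space z does not interpolate linearly across the screen, but 1/z does (toy 2D projection, illustrative numbers):

```python
def eye_point(p0, p1, s):
    # point on the 3D segment at eye-space parameter s
    return tuple((1 - s) * a + s * b for a, b in zip(p0, p1))

def project(p):
    # simple perspective projection of an (x, z) point onto the plane z = 1
    x, z = p
    return x / z

# segment from (0, 1) to (4, 5) in (x, z) eye space
p0, p1 = (0.0, 1.0), (4.0, 5.0)

# take the eye-space midpoint and find where it lands on screen
mid = eye_point(p0, p1, 0.5)  # (2, 3)
t = (project(mid) - project(p0)) / (project(p1) - project(p0))

# eye-space z does NOT interpolate linearly in screen space...
lerp_z = (1 - t) * p0[1] + t * p1[1]
print(abs(lerp_z - mid[1]) > 1e-6)          # True

# ...but 1/z does, which is why storing something linear in 1/z is so convenient
lerp_inv_z = (1 - t) * (1 / p0[1]) + t * (1 / p1[1])
print(abs(lerp_inv_z - 1 / mid[1]) < 1e-9)  # True
```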
9. ## redacted

First of all, parallelising a problem is REALLY HARD in the general case, and when you do succeed you tend to require lots of communication. Look up Amdahl's law. And I hate to be blunt, but you don't know jack shit about processor design. Read the book "Computer Architecture: A Quantitative Approach" by Hennessy and Patterson ( http://www.amazon.com/exec/obidos/ASIN/1558603298/102-9711485-7224123 ). You'll learn LOTS, I promise.
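Amdahl's law, for reference, as a one-liner (the 95% parallel fraction below is just an illustrative number):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    # Amdahl's law: the serial fraction caps the achievable speedup,
    # no matter how many processors you throw at the problem
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# even with 95% of the work parallelised, 1000 processors give under 20x
print(amdahl_speedup(0.95, 1000))     # ~19.6
print(amdahl_speedup(0.95, 10 ** 9))  # approaches the 1/0.05 = 20x ceiling
```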
10. ## Reversing a rotation matrix

Rotation matrices aren't "presumably" orthogonal; all rotation matrices are orthogonal, and a non-orthogonal 3x3 matrix doesn't represent a (pure) rotation. But this just reinforces your point, so I don't really know why I posted it. Nitpicky by nature...
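A quick numeric check of the orthogonality property (R times its transpose is the identity, which is exactly why the inverse of a rotation is just its transpose):

```python
import math

def rotation_z(theta):
    # 3x3 rotation about the z axis
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def is_orthogonal(m, tol=1e-9):
    # m is orthogonal iff m * m^T equals the identity
    n = len(m)
    for i in range(n):
        for j in range(n):
            dot = sum(m[i][k] * m[j][k] for k in range(n))
            if abs(dot - (1.0 if i == j else 0.0)) > tol:
                return False
    return True

print(is_orthogonal(rotation_z(0.7)))  # True: transpose inverts it
print(is_orthogonal([[1.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]))  # False: a shear, not a rotation
```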
11. ## Reflection off of arbitrary normal

Notation: `.` <==> vector dot product, `*` <==> vector-scalar or scalar-scalar multiplication.
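Using that notation, the standard reflection formula is r = d - 2*(d . n)*n, for an incoming direction d and a unit surface normal n. A minimal sketch:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    # reflect direction d about the unit normal n: r = d - 2*(d . n)*n
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

# a ray heading down-right bouncing off a floor with normal (0, 1, 0)
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (1.0, 1.0, 0.0)
```

Note that n must be normalised for this to work; otherwise you need to divide by n . n.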
12. ## Transforming one matrix to another gradually

Here's a quick and easy way. First you need the rotation matrix R which rotates from your camera's original orientation O1 to the desired one O2:

O2 = R O1 <=> O2 (O1^-1) = R <=> [orthogonality] O2 O1^t = R

Now find the axis of rotation for R. The axis of rotation is a vector that is unchanged when transformed by R, i.e. Ra = a. Therefore, find the eigenvector of R associated with the eigenvalue 1, which amounts to solving the linear system (R - I)v = 0, where I is the identity matrix. This should be fairly easy if you're familiar with linear algebra (checking that det(R - I) = 0 confirms that 1 really is an eigenvalue). You can find the angle of rotation with a simple dot product, then just interpolate the angle linearly and create an orientation from that.

The other (better) alternative is to use quaternions, but if you're not familiar with them that will take more time. It's worth it in the long run though. Sorry this got cut a little short because I had to leave; hopefully you'll get something useful out of it anyway.
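A sketch of the extraction step. This uses the closed form for the axis and angle, which is equivalent to solving for the eigenvector with eigenvalue 1, and like that approach it degenerates when the angle is near 0 or 180 degrees:

```python
import math

def axis_angle(r):
    # extract rotation axis and angle from a 3x3 rotation matrix R
    # angle from the trace: trace(R) = 1 + 2*cos(angle)
    cos_a = (r[0][0] + r[1][1] + r[2][2] - 1.0) / 2.0
    angle = math.acos(max(-1.0, min(1.0, cos_a)))
    # the axis lives in the skew-symmetric part R - R^T
    ax = (r[2][1] - r[1][2], r[0][2] - r[2][0], r[1][0] - r[0][1])
    length = math.sqrt(sum(c * c for c in ax))
    return tuple(c / length for c in ax), angle

# rotation of 90 degrees about z
r = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
axis, angle = axis_angle(r)
print(axis, angle)  # axis (0, 0, 1), angle ~pi/2
```

Once you have the axis and angle, you can interpolate the angle linearly and rebuild the intermediate orientation each frame.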
13. ## 2D Rigid Body Physics and The Shaking

Assuming you're using a collision method that gives you penetration depth, e.g. the separating axis test, then translation is the only thing you need to prevent interpenetration. In fact, rotating as well won't work, since the objects are only guaranteed not to interpenetrate when you do a pure translation along the collision normal. The collision impulse you apply in the event of a collision can of course cause a rotation, but since you seem to have an inertia matrix I assume that works. Try just taking smaller time steps: if the problem gets less visible or disappears, the problem is your ODE solver/integrator; if it doesn't, something else is wrong. Try at least 300-400 physics ticks per second before giving up.
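A minimal sketch of the translation-only separation described above. The positions, normal direction, and the even half-and-half split between the two bodies are illustrative assumptions:

```python
def separate(pos_a, pos_b, normal, depth):
    # push the two bodies apart along the collision normal, half the
    # penetration depth each, so they no longer interpenetrate; no rotation
    half = 0.5 * depth
    new_a = tuple(p - half * n for p, n in zip(pos_a, normal))
    new_b = tuple(p + half * n for p, n in zip(pos_b, normal))
    return new_a, new_b

# two overlapping 2D boxes: normal points from A to B, overlap of 0.2 units
a, b = separate((0.0, 0.0), (0.9, 0.0), (1.0, 0.0), 0.2)
print(a, b)  # (-0.1, 0.0) (1.0, 0.0)
```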
14. ## For such a good IDE, Visual Studio.NET can be really annoying

I often hear people say that they can't develop on anything but MSVC/Windows because anything else means a crappy IDE, and this always confuses me, since the MSVC IDE sucks giant hairy balls. I mean, maybe all the other IDEs suck too, but the MSVC editor doesn't even get basic stuff like EDITING TEXT right.

* Changing the default syntax highlighting is clunky, and the default settings omit stuff like highlighting string literals, so it's easy to miss when you've failed to match quotes.
* The auto-indentation sucks and continually gives bogus results if you use a K&R brace style.
* Autocomplete and IntelliSense work sometimes, but then sometimes they don't. I hear Visual Assist helps with this, but that's a plug-in so it doesn't count.
* There's no refactoring support at all. Not even really simplistic stuff like, say, renaming a method in a class. Granted, not many C++ IDEs support this, since C++ is such a bitch to parse, but still.
* Parts of the interface are completely impossible to understand without access to a manual. I don't know how long I've spent trying to remember how to set the correct "context" for a data breakpoint while debugging. Why can't you just click on a variable, or select from a list of possible ones in the context current to the cursor?
* The debugger interface is borked in .NET, since for some unfathomable reason it keeps moving the dockable/draggable watch windows around, or hiding them completely when I don't want it to. Also, viewing STL containers in the debugger is a PITA. Why can't you just click on a variable in the source and have it displayed in a watch window? That would greatly ease stepping through stuff when debugging.

With all that said, the debugger itself is pretty good, edit-and-continue rawks, and the compiler is good, so I forgive MSVC its shortcomings. But it's still awful. Just like all OS's are awful and all marketing people are evil. Some are just less bad than others.
15. ## 2D Rigid Body Physics and The Shaking

Obviously you need an inertia tensor to get correct movement for your objects, but that has no effect on the shaking problem mentioned here.