robotichrist
Member Since: 23 Apr 2006 · Offline · Last Active: Sep 07 2011 04:54 PM
Community Stats
Group: Members
Active Posts: 73
Profile Views: 1,806
Submitted Links: 0
Member Title: Member
Age: Unknown
Birthday: Unknown
Gender: Not Telling
#4818797 Voxel / Marching Cubes
Posted by robotichrist on 02 June 2011  12:49 PM
1. First, generate an initial coarse mesh that covers the level set. This can be as simple as covering it with uniform boxes and gluing them together along coplanar faces, throwing out the boxes that do not cross the boundary.
2. Next, take your coarse mesh and perturb the vertices so that they lie on the level-set boundary. To do this, use a nonlinear root-finding method such as Newton's method or gradient descent.
3. Take the resulting mesh that you got from step 2 and apply a surface subdivision algorithm.
4. Go to step 2 and repeat until your mesh is reasonably fine.
Of course, this can get into trouble if your initial guess/mesh was too coarse (i.e. you cover up a hole or some other topological feature). There are probably ways to refine this to make it more efficient and robust. However, the general idea is to treat the problem like any ordinary nonlinear root-finding problem: do gradient descent and iterate until it converges.
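The projection in step 2 can be sketched in a few lines (a minimal Python sketch under my own assumptions: the level set is given as a scalar function f that is zero on the surface, and the sphere here is just an illustrative stand-in):

```python
import math

def sphere(x, y, z):
    """Example level set: the unit sphere, f = 0 on the surface."""
    return x * x + y * y + z * z - 1.0

def gradient(f, p, h=1e-5):
    """Central-difference gradient of f at point p."""
    x, y, z = p
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

def project_to_level_set(f, p, iters=20):
    """Newton-style projection: step p along the gradient until f(p) ~ 0.
    Each step is x <- x - f(x) * grad f / |grad f|^2."""
    for _ in range(iters):
        fx = f(*p)
        gx, gy, gz = gradient(f, p)
        g2 = gx * gx + gy * gy + gz * gz
        if g2 < 1e-12:          # flat spot: give up rather than divide by ~0
            break
        s = fx / g2
        p = (p[0] - s * gx, p[1] - s * gy, p[2] - s * gz)
    return p

# Perturb one coarse-mesh vertex onto the surface
v = project_to_level_set(sphere, (0.5, 0.5, 0.5))
```

For a well-behaved level set this converges in a handful of iterations; in a real mesher you would run it over every vertex after each subdivision pass.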
EDIT: 2? Really? I never claimed this method was particularly good or even that original (I am sure someone has tried this before), but it is simple and it works. Could someone please explain what their issue is with this post? I would be happy to stand corrected if there is something wrong with what I wrote.
#4815783 functional programming languages
Posted by robotichrist on 25 May 2011  03:40 PM
Even if you are careful and use things like monads to abstract some of this away, it is still very easy to make a mistake and accidentally put a giant memcpy right in the middle of your main loop (and some languages will do this silently, with different behavior depending on what compiler options you use!). Some languages (like Erlang) try to get around this by replacing random-access arrays with binary trees. This has the advantage that local updates can be made without a complete copy, but the disadvantage is that random accesses are much slower, and the data structures tend to exhibit poor locality/cache performance, often ending up several orders of magnitude slower.
This is really a shame, since I like the idea of using functional programming for things like organizing game logic and entity behavior, and I think functional code is usually more elegant and readable than imperative code. However, the awkwardness of using FP for graphics is just too big a downside to make it practical. One way I could see it working out is if you use a functional back end for, say, a server (such as in an MMO or browser-based game). In this situation the game logic could be sufficiently abstracted from your display that it might be feasible to use something like Erlang or Haskell. Of course, if it turns out later that you do need to do physics on the server side, then you will probably end up getting screwed.
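To illustrate the Erlang-style trade-off mentioned above, here is a rough Python sketch of a persistent tree-backed array (illustrative only, not how Erlang actually implements it): updating one element copies just the O(log n) nodes on the path to it, while untouched subtrees are shared between versions.

```python
# Persistent (immutable) array backed by a binary tree.
# A pure functional update of a flat array would copy all n elements;
# here an update copies only the O(log n) spine, but reads become O(log n).

def build(values):
    """Build a balanced tree: a leaf is a value, a node is (left, right)."""
    if len(values) == 1:
        return values[0]
    mid = len(values) // 2
    return (build(values[:mid]), build(values[mid:]))

def get(tree, size, i):
    """Descend to element i; O(log n) instead of O(1)."""
    while size > 1:
        half = size // 2
        if i < half:
            tree, size = tree[0], half
        else:
            tree, size, i = tree[1], size - half, i - half
    return tree

def update(tree, size, i, value):
    """Return a new tree sharing every untouched node with the old one."""
    if size == 1:
        return value
    half = size // 2
    if i < half:
        return (update(tree[0], half, i, value), tree[1])
    return (tree[0], update(tree[1], size - half, i - half, value))

old = build(list(range(8)))
new = update(old, 8, 3, 99)   # old is left fully intact
```

The structural sharing is the whole point: `new` and `old` literally reuse the same right subtree object, which is what makes local updates cheap without mutation.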
#4814806 Physics Estimation Question
Posted by robotichrist on 23 May 2011  05:30 PM
S.-W. Hsu and J. Keyser, "Statistical Simulation of Rigid Bodies," Eurographics/SIGGRAPH Symposium on Computer Animation (2009).
Another neat idea that I've seen used successfully is the concept of curl noise for fluid/flow field simulation. There was a neat paper on it at SIGGRAPH 2007 which describes the basics of the technique:
R. Bridson, J. Hourihan, and M. Nordenstam, "Curl-Noise for Procedural Fluid Flow," SIGGRAPH (2007).
A slight refinement of this method is divergence-free noise, which achieves similar-looking results:
I. DeWolf, "Divergence-Free Noise," technical report.
In all these methods the general idea seems to be that you identify some invariant or statistical property you want the physics to satisfy (rather than a strict equation of motion), and then instead of solving the system exactly you just pick a random solution that satisfies those constraints. Of course this is probably not going to work too well with rigid body dynamics, since the entire work of solving the system is wrapped up in the constraints; but it may work better for faking fractures, particles, or complex turbulent phenomena that depend chaotically on their initial conditions. And of course none of these things does exactly what you are asking, but maybe they can help inspire you to find a workable solution.
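As a rough illustration of the curl-noise idea (my own toy sketch, not the paper's implementation): take any smooth scalar potential psi and use the 2D curl v = (dpsi/dy, -dpsi/dx), which is divergence-free by construction, i.e. it looks like an incompressible flow no matter what psi is. The sinusoidal potential below is a stand-in for real noise such as Perlin noise.

```python
import math

def potential(x, y):
    """Smooth scalar potential; a real implementation would use Perlin noise."""
    return math.sin(1.3 * x) * math.cos(0.9 * y)

def curl_velocity(x, y, h=1e-4):
    """2D curl of the potential: v = (dpsi/dy, -dpsi/dx).
    Any such field has zero divergence, hence 'incompressible-looking' flow."""
    dpsi_dx = (potential(x + h, y) - potential(x - h, y)) / (2 * h)
    dpsi_dy = (potential(x, y + h) - potential(x, y - h)) / (2 * h)
    return (dpsi_dy, -dpsi_dx)

def divergence(field, x, y, h=1e-3):
    """Numerical divergence, to check the field really is divergence-free."""
    dvx_dx = (field(x + h, y)[0] - field(x - h, y)[0]) / (2 * h)
    dvy_dy = (field(x, y + h)[1] - field(x, y - h)[1]) / (2 * h)
    return dvx_dx + dvy_dy
```

This is exactly the "pick a random solution that satisfies the constraint" pattern: the incompressibility constraint is baked into the construction, and the randomness lives entirely in the choice of potential.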
#4812160 Computer science or game programming?
Posted by robotichrist on 17 May 2011  05:35 PM
... especially in a field such as CS, which is constantly evolving.
Sorry, I don't mean to pick on you, but I would like to take a moment to point out that while I agree with your overall conclusion, I strongly disagree with you on this point. The fundamentals of CS are not really evolving much today. Basic algorithms, data structures, analysis, and computer architecture have not changed in the past 30 years, nor does it seem likely that they will go through any radical updates in the foreseeable future. Of course the cutting edge of CS is rapidly advancing, but this is not the sort of material you will learn in a typical undergrad CS curriculum, nor is it even that important (no offense to those involved in research) to 99% of the general programming population out there. This is exactly why a CS degree is so valuable: these foundational concepts are not likely to change, and they are useful in a wide array of situations.
#4798547 nearest neighbour 1bit bitmap
Posted by robotichrist on 14 April 2011  02:15 PM
A. Gionis, P. Indyk, and R. Motwani, "Similarity Search in High Dimensions via Hashing," Proc. of VLDB (1999).
As far as I know, it is currently the best solution to the Hamming-distance nearest neighbor problem.
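The core trick for Hamming distance, bit sampling, can be sketched in a few lines (a simplified illustration of the idea, not the paper's full scheme; all parameter values here are made up):

```python
import random

def make_hashes(n_bits, n_tables=8, bits_per_hash=12, seed=0):
    """Each hash function samples a fixed random subset of bit positions.
    Bitmaps that are close in Hamming distance agree on most positions,
    so they collide in at least one table with high probability."""
    rng = random.Random(seed)
    return [tuple(rng.sample(range(n_bits), bits_per_hash))
            for _ in range(n_tables)]

def hash_key(bitmap, positions):
    return tuple(bitmap[i] for i in positions)

def build_index(bitmaps, hashes):
    """One dict per hash function, mapping sampled-bits key -> bitmap ids."""
    tables = [dict() for _ in hashes]
    for idx, bm in enumerate(bitmaps):
        for table, positions in zip(tables, hashes):
            table.setdefault(hash_key(bm, positions), []).append(idx)
    return tables

def query(bitmap, tables, hashes):
    """Union of all buckets the query lands in: candidate near neighbors,
    to be verified by an exact Hamming-distance check afterwards."""
    candidates = set()
    for table, positions in zip(tables, hashes):
        candidates.update(table.get(hash_key(bitmap, positions), []))
    return candidates
```

Tuning `n_tables` and `bits_per_hash` trades recall against the number of false candidates, which is essentially the knob the paper analyzes.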
#4796017 How good are the best voxel compression algorithms?
Posted by robotichrist on 08 April 2011  10:16 AM
I've done a bit of work here, and basically I left my voxel project because, just like you say, I couldn't figure out how to compress it enough.
I got it down to about a byte and a half of positional data (with a surface representation and a run-length encoding of axis-aligned blocks) on a basic fractal mountainside, but that's still not enough compression.
My world would take up hundreds of gigs of disk space, so it's not viable.
I think you need to know how PNG compression works, because you compress voxels in a similar way. Apparently Jon Olick says he can get it down to a bit and a half of positional data (that's more like it!), but the internals of a PNG file are a bit beyond me, so I decided to switch over to another common geometry compression technique, displacement mapping. Then PNG compression is easy, because all you have to do is save the graphic and presto, it's compressed nicely for you... I am just having trouble generating a decent displacement map at a decent speed, but that's another story.
I am fully behind this "unique geometry and world" idea though; my plan is to bake it all with offline global illumination, but I'm not even close to getting that far yet.
PNG is still basically run-length encoding. The only difference is that the runs can be preprocessed using a handful of predefined filters to make them compress more efficiently. For example, one of the preprocessing options is to take the difference between successive pixel values before doing the run-length encoding; as a result, horizontal gradients can be run-length encoded very efficiently. Other filters are more sophisticated, like the Paeth filter[1] or various types of median filters, but the overall idea is basically the same.
As a result, PNG tends to work well with flat images, linear gradients, dithering, and other simple repeating patterns. It tends to work very badly for images with a lot of higher-order regions, like cubic or higher-degree surface patches. PNG-style compression *might* be a good idea for a voxel engine, but I am still not convinced that it can beat the B-rep barrier. The reason is that at the end of the day you still have to store the start/stop of each run, which means you are basically storing the boundary of the object anyway. Other compression formats like JPEG could conceivably do better by making lossy trade-offs, but I am not sure what the overall cost would be in the end, or whether it would even look "good enough" for a real game engine.
[1]: A. W. Paeth, "Image File Compression Made Easy," Graphics Gems II.
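A toy illustration of the filtering idea (my own sketch, not PNG's actual codec, which uses DEFLATE rather than plain run-length encoding): the Sub filter turns a smooth gradient, which is incompressible as raw runs, into a constant run that run-length encodes trivially.

```python
def sub_filter(row):
    """PNG-style 'Sub' filter: store each byte as the difference from its
    left neighbor (mod 256). A linear gradient becomes a constant run."""
    return [(b - (row[i - 1] if i else 0)) % 256 for i, b in enumerate(row)]

def rle(data):
    """Run-length encode a byte sequence as [value, count] pairs."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

gradient = list(range(10, 90))              # smooth horizontal ramp, 80 bytes
raw_runs = rle(gradient)                    # every byte differs: 80 runs
filtered_runs = rle(sub_filter(gradient))   # after filtering: 2 runs
```

Note how the filter doesn't compress anything by itself; it just reshapes the data so the run-length stage can, which is the same reason it only helps for low-order (flat/linear) regions.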
#4795528 How good are the best voxel compression algorithms?
Posted by robotichrist on 07 April 2011  08:48 AM
This suggests that the biggest and most important problem any new voxel engine needs to solve is *not* how to max out rendering performance, but rather how to efficiently compress all that volumetric data. In fact, somewhat counterintuitively, I would suggest that highly compressed voxel data may even be *faster* to render than uncompressed data. This was certainly true back in the old days of sprite-based rendering, where run-length-encoded sprites could be blitted to the screen much more quickly than with a brute-force bit-by-bit memcpy. The reason is that modern CPUs and GPUs are primarily bandwidth limited, so the trade-off between memory and computation is becoming increasingly lopsided in favor of the latter.
One common technique for encoding voxel data is to use either run-length encoding or octrees. The underlying assumption in each case is that the voxel data is mostly piecewise constant, so you only need to keep track of the places where it transitions between different regions. The reason this works is that the boundary of an object is much smaller than the object itself (typically on the order of O(n^((d-1)/d)) in d dimensions). However, if you take this idea to its ultimate conclusion, the best you can reasonably hope for is to compress the voxel data down to just the information encoded on the boundary; in other words, to reduce it to a B-rep. This is slightly annoying, since it seems to take you right back to polygon/B-rep objects again.
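The octree half of that claim can be sketched as follows (a minimal illustration; real sparse voxel octrees use packed child masks, not nested Python lists). Uniform regions collapse to single leaves, so the surviving nodes cluster along the boundary:

```python
def build_octree(voxels, x, y, z, size):
    """Recursively collapse axis-aligned cubes where every voxel agrees.
    voxels is indexed as voxels[x][y][z]; size must be a power of two.
    A leaf is a bare material value; an internal node is a list of 8 children."""
    first = voxels[x][y][z]
    if size == 1 or all(voxels[x + i][y + j][z + k] == first
                        for i in range(size)
                        for j in range(size)
                        for k in range(size)):
        return first                       # uniform block: one leaf
    h = size // 2
    return [build_octree(voxels, x + dx, y + dy, z + dz, h)
            for dx in (0, h) for dy in (0, h) for dz in (0, h)]

# A 4x4x4 world: solid for x < 2, empty above -- the whole volume
# reduces to 8 leaves, all sitting on the material boundary.
grid = [[[1 if x < 2 else 0 for z in range(4)]
         for y in range(4)] for x in range(4)]
tree = build_octree(grid, 0, 0, 0, 4)
```

The O(size^3) uniformity scan per node is deliberately naive; the point is only that piecewise-constant data costs nodes in proportion to its boundary, which is exactly the B-rep limit discussed above.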
So, I am asking this question to try to figure out whether it is actually possible to do much better than this. There are a bunch of "crank" videos on YouTube with people claiming to do so (e.g. Unlimited Detail and Atomengine), but there are many reasons to be skeptical of their claims (such as a lack of independent verification, and a questionable motivation to attract investors). Are there any credible research results that honestly investigate this problem? The work by Sven Forstmann (spacerat on these forums) comes to mind, but I don't think he has written anything lately. There are also sparse voxel octrees, but I am not convinced that a simple octree compression scheme gets you anything better than what can already be done with B-reps (see the above comment).
What are your thoughts on this issue?
#4790001 Friction impulse. Need formula.
Posted by robotichrist on 24 March 2011  10:28 AM
Hello. For the normal impulse I used Chris Hecker's papers. All works fine, but the formula doesn't take friction into account. I believe there's an equivalent for friction (not just tangent velocity * cof), but I couldn't google it.
Friction is complicated. In the naive sense, rigid body dynamics with a Coulomb friction force (which is what you are describing) is not even solvable in general. The classic and ancient example of this is Painlevé's paradox, which remained unresolved until relatively recently.
As a result, if you want to do friction using the classical Coulomb approximation, you are in for a very rude awakening (as I am sure you have just discovered!). A better-posed way to formulate friction is the principle of maximum dissipation. This says that the friction force reduces the velocity (i.e. kinetic energy) in the tangent plane of the contact as much as possible, subject to a constraint on the magnitude of the friction force relative to the normal force. This constraint set forms a cone shape, so it is commonly called the "friction cone". To handle the constraint, many LCP solvers use a piecewise-linear approximation of the friction cone to ensure it is not violated. For a more precise and careful discussion of these issues, there are some really great papers by David E. Stewart. Here is a good one to start with (in my opinion):
David E. Stewart, "Rigid-Body Dynamics with Friction and Impact," SIAM Review 42(1) (2000).
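A common game-engine shortcut, far cruder than the LCP formulation in Stewart's paper, is to compute the maximum-dissipation tangential impulse and clamp it to the Coulomb cone. A 1D illustrative sketch (single contact, velocities already decomposed into normal component vn and tangential component vt; all names are mine):

```python
def contact_impulses(vn, vt, inv_mass, restitution, mu):
    """Resolve one contact: normal impulse jn from restitution, then the
    tangential impulse jt that would stop sliding entirely (maximum
    dissipation), clamped to the Coulomb cone |jt| <= mu * jn."""
    # Normal impulse: cancel the approach velocity, plus restitution bounce.
    jn = max(0.0, -(1.0 + restitution) * vn / inv_mass)
    # Maximum-dissipation tangential impulse: kill the tangential velocity...
    jt = -vt / inv_mass
    # ...but never exceed what friction can supply (the cone constraint).
    limit = mu * jn
    jt = max(-limit, min(limit, jt))
    return jn, jt
```

With mu = 0.5, a body hitting at vn = -2 while sliding at vt = 3 gets jn = 2 but only jt = -1: friction can slow the slide, not stop it. In 3D the clamp becomes a projection onto a circular (or piecewise-linear, for LCP solvers) cone, and coupled contacts are why the full problem needs the machinery in the paper above.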
#4782915 Fourier Collision Detection
Posted by robotichrist on 07 March 2011  12:44 PM
I know that collision detection is a frequent topic of discussion here, so I figured this might be a good forum to present an early version of some research I have been working on and hopefully get some early feedback. The basic idea is to rewrite the collision test as a convolution, then use a low-pass filter to discretize the geometry of the solids. These convolutions can then be evaluated and differentiated exactly using a finite Fourier summation, which leads to a new collision test / Jacobian calculation that scales with the accuracy of the solids (i.e. the running time is proportional to the number of coefficients, which in turn controls the accuracy of the test). I believe this technique has the potential to be much simpler than conventional contact/geometry-based collision detection schemes and could lead to a new class of solvers for approximate collision detection/rigid body dynamics. Here is a link to an early draft of the paper:
http://salcnc.me.wisc.edu/index.php?option=com_remository&Itemid=143&func=fileinfo&id=191
This is still an early version, but I'd definitely appreciate any and all comments. I am also curious to know what those who have more experience with developing rigid body dynamics engines may think about this approach, and what their views are on the general problem.
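For readers unfamiliar with the setup, the underlying "collision test as convolution" idea can be illustrated on a grid (a toy sketch of mine; the draft evaluates this with truncated Fourier series and low-pass-filtered solids rather than by direct summation): two solids intersect exactly when the correlation of their indicator functions is nonzero at the current relative offset.

```python
def overlap(a, b, offset):
    """Correlation of two 2D indicator grids at an integer offset:
    (a . b)(t) = sum_x a(x) * b(x - t).
    A nonzero value means the translated shapes share at least one cell,
    and the value itself measures the overlap volume."""
    total = 0
    ox, oy = offset
    for x, row in enumerate(a):
        for y, va in enumerate(row):
            bx, by = x - ox, y - oy
            if va and 0 <= bx < len(b) and 0 <= by < len(b[0]):
                total += va * b[bx][by]
    return total

# Two 2x2 solid blocks on a grid
block = [[1, 1], [1, 1]]
touching = overlap(block, block, (1, 1))   # shifted diagonally: 1 shared cell
separate = overlap(block, block, (3, 3))   # fully apart: no shared cells
```

Because the overlap is a smooth function of the offset once the indicators are low-pass filtered, it can be differentiated with respect to the configuration, which is where the Jacobian calculation in the paper comes from.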