
robotichrist

Member Since 23 Apr 2006
Offline Last Active Sep 07 2011 04:54 PM

#4818797 Voxel / Marching Cubes

Posted by robotichrist on 02 June 2011 - 12:49 PM

Here is a really simple, bone-headed algorithm that I've used for level-set surface reconstruction. Unlike marching cubes/tets/whatever, it has no special cases, though the trade-off is that it can end up being a bit slower. This is how it works:

1. First, generate an initial coarse mesh that covers the level set. This can be as simple as covering it with uniform boxes, throwing out the boxes that do not cross the boundary, and gluing the rest together along their shared faces.

2. Next, take your coarse mesh and perturb the vertices so that they lie on the level-set boundary. To do this, you just use a non-linear root-finding method such as Newton's method/gradient descent (a rough sketch of this projection step is below).

3. Take the resulting mesh that you got from step 2 and apply a surface subdivision algorithm.

4. Go to step 2 and repeat until your mesh is reasonably fine.

Of course, where this can get into trouble is if your initial guess/mesh was too coarse (i.e. you cover up a hole or something), so there are probably ways to refine this to make it more efficient and robust. The general idea, though, is to treat it like any ordinary non-linear root-finding problem: do gradient descent and iterate until it converges.
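
To make step 2 concrete, here is a minimal sketch of the vertex projection, assuming you have the level-set function and its gradient available as callbacks (the names f and grad_f are placeholders, not from any particular library):

#include <cmath>

struct Vec3 { double x, y, z; };

// Pull a vertex toward the zero isosurface with a few Newton/gradient-descent
// steps:  x <- x - f(x) * grad_f(x) / |grad_f(x)|^2
Vec3 project_to_level_set(Vec3 p,
                          double (*f)(const Vec3&),
                          Vec3 (*grad_f)(const Vec3&),
                          int max_iters = 10, double tol = 1e-6)
{
    for (int i = 0; i < max_iters; ++i) {
        double val = f(p);
        if (std::fabs(val) < tol) break;            // already on the surface
        Vec3 g = grad_f(p);
        double g2 = g.x*g.x + g.y*g.y + g.z*g.z;
        if (g2 < 1e-12) break;                      // avoid dividing by ~0
        double s = val / g2;
        p.x -= s * g.x;  p.y -= s * g.y;  p.z -= s * g.z;
    }
    return p;
}

Each subdivision pass (step 3) just re-runs this projection on the new vertices until the mesh is fine enough.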


EDIT: -2? Really? I never claimed this method was particularly good or even that original (I am sure someone has tried this before), but it is simple and it works. Could someone please explain what their issue is with this post? I would be happy to stand corrected if there is something wrong with what I wrote.


#4815783 functional programming languages

Posted by robotichrist on 25 May 2011 - 03:40 PM

I don't want to start a language war, but even though I really do like functional programming, it is hard to make it work in any application that needs to push around big blobs of semi-structured binary data (such as games, numerical code, or general graphics/physics/image-processing work). The reason it ends up being so difficult is that you implicitly need to make lots of copies of your data to get things to work without side effects. When the things you are copying are small and don't persist for long, this is no big deal. With big vectors and matrices, however, it is definitely not true, and copying becomes a very expensive proposition.

Even if you are careful and use things like monads to abstract some of this away, it is still very easy to make a mistake and accidentally put a giant memcpy right in the middle of your main loop (and some languages will do this silently, with behavior that can even change depending on which compiler options you use!). Some languages (like Erlang) try to get around this by replacing random-access arrays with binary trees. This has the advantage that local updates can be made without a complete copy, but the disadvantage is that random accesses are slower and the data structures tend to have poor locality/cache performance, often ending up orders of magnitude slower than a flat array.


This is really a shame, since I like the idea of using functional programming for things like organizing game logic and entity behavior, and I think functional code is usually more elegant and readable than imperative code. However, the awkwardness of using FP for graphics is just too big a downside to make it practical. One way I could see it working out is to use a functional backend for a server (such as in an MMO or browser-based game). In that situation the game logic could be sufficiently abstracted away from the display that it might be feasible to use something like Erlang or Haskell. Of course, if it later turns out that you do need to do physics on the server side, then you will probably end up getting screwed.


#4814806 Physics Estimation Question

Posted by robotichrist on 23 May 2011 - 05:30 PM

It is an interesting idea, and I don't think there is a lot of good literature on this subject yet. There are a few things you can fake using these types of methods, but it is more an art than a science. For example, if you want it to look like rigid-body dynamics/inertia is working correctly, it is often enough to impart a constant angular velocity to a rigid body and let it spin freely (even though this may not be the correct motion described by the Euler equations, whose torque-free solution involves Jacobi elliptic functions). Collisions and interactions are generally harder to fake, but you may be interested in reading the following paper:

S.-W. Hsu and J. Keyser, "Statistical Simulation of Rigid Bodies," Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2009).

Another neat idea that I've seen used successfully is curl noise for fluid/flow-field simulation. There was a paper at SIGGRAPH 2007 that describes the basics of the technique:

R. Bridson, J. Hourihan, and M. Nordenstam, "Curl-Noise for Procedural Fluid Flow," ACM SIGGRAPH (2007).

A slight refinement of this method is divergence-free noise, which achieves similar-looking results:

I. DeWolf, "Divergence-Free Noise," technical report.
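
For what it's worth, here is a minimal 2D sketch of the curl-noise idea (my own illustration, not code from the papers above): take any scalar noise potential and use its rotated gradient as the velocity field, which is divergence-free by construction. The psi callback is a placeholder for whatever noise function you like.

#include <utility>

// Velocity from a scalar potential:  v = ( d(psi)/dy, -d(psi)/dx ),
// approximated with central differences. Such a field has zero divergence,
// so it "looks like" an incompressible flow without solving any equations.
std::pair<double, double> curl_noise_velocity(double (*psi)(double, double),
                                              double x, double y,
                                              double h = 1e-4)
{
    double dpsi_dx = (psi(x + h, y) - psi(x - h, y)) / (2.0 * h);
    double dpsi_dy = (psi(x, y + h) - psi(x, y - h)) / (2.0 * h);
    return { dpsi_dy, -dpsi_dx };   // divergence-free by construction
}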

In all of these methods the general idea seems to be that you pick some invariants or statistical properties you want the physics to satisfy (constraints that are not strict), and then, instead of trying to solve the motion exactly, you just pick a random solution that satisfies those constraints. This is probably not going to work too well for rigid-body dynamics, since the entire work of solving the system is wrapped up in the constraints; but it may work better for faking fractures, particles, or complex turbulent phenomena that depend chaotically on their initial conditions. None of these things do exactly what you are asking, but maybe they can help inspire a workable solution.




#4812160 Computer science or game programming?

Posted by robotichrist on 17 May 2011 - 05:35 PM

... especially in a field such as CS, which is constantly evolving.



Sorry, I don't mean to pick on you, but I would like to take a moment to point out that while I agree with your overall conclusion, I strongly disagree with you on this point. The fundamentals of CS are not really evolving much today. Basic algorithms, data structures, analysis, and computer architecture have not changed in the past 30 years, nor does it seem likely that they will go through any radical updates in the foreseeable future. Of course the cutting edge of CS is rapidly advancing, but this is not the sort of stuff you will learn in a typical undergrad CS curriculum, nor is it even that important (no offense to those involved in research) to 99% of the general programming population out there. This is exactly why a CS degree is so valuable: these foundational concepts are not likely to change and are useful in a wide array of situations.



#4798547 nearest neighbour 1-bit bitmap

Posted by robotichrist on 14 April 2011 - 02:15 PM

I don't know if you are still working on this problem, but you may want to take a look at this paper:

A. Gionis, P. Indyk, and R. Motwani, "Similarity Search in High Dimensions via Hashing," Proc. of VLDB (1999).

As far as I know, it is the current best solution to the Hamming-distance nearest-neighbor problem.
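
If it helps, here is a rough sketch of the bit-sampling flavour of locality-sensitive hashing for Hamming distance, which is the general family that paper builds on (my own illustration; the sizes and names are made-up assumptions, not from the paper):

#include <bitset>
#include <cstdint>
#include <random>
#include <vector>

const std::size_t NBITS = 1024;                       // bits per bitmap (illustrative)

// One hash function = k randomly chosen bit positions (k <= 64 here so the
// key fits in a 64-bit integer). Bitmaps that are close in Hamming distance
// agree on most bits, so they are likely to land in the same bucket.
std::vector<std::size_t> make_hash(std::size_t k, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> pick(0, NBITS - 1);
    std::vector<std::size_t> positions(k);
    for (auto& p : positions) p = pick(rng);          // random bit positions
    return positions;
}

std::uint64_t hash_bitmap(const std::bitset<NBITS>& bmp,
                          const std::vector<std::size_t>& positions) {
    std::uint64_t key = 0;
    for (std::size_t p : positions)
        key = (key << 1) | (bmp[p] ? 1u : 0u);        // concatenate sampled bits
    return key;
}

In practice you build several such tables with independent hash functions, and only compute exact Hamming distances against the candidates that collide.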


#4796017 How good are the best voxel compression algorithms?

Posted by robotichrist on 08 April 2011 - 10:16 AM

I've done a bit of work here, and basically I left my voxel project because, just like you say, I couldn't figure out how to compress it enough.
I got it down to about a byte and a half of positional data (with a surface representation and a run-length encoding of axis-aligned blocks) on a basic fractal mountainside, but that's just not enough compression.
My world would take up hundreds of gigs of disk space, so it's not viable.
I think you need to know how PNG compression works, because you compress voxels in a similar way; apparently Jon Olick says he can get it down to a bit and a half of positional data (that's more like it!), but the internals of a PNG file are a bit beyond me, so I decided to switch over to another common geometry compression approach, displacement mapping. Then PNG compression is easy, because all you have to do is save the graphic and presto, it's compressed nicely for you... I'm just having trouble generating a decent displacement map at a decent enough speed, but that's another story.
I am fully into this "unique geometry and world" idea though; my plan is to bake it all with offline global illumination, but I'm not even close to getting that far yet. :)




PNG is not much more exotic than run-length encoding: the scanlines are handed to a general-purpose DEFLATE (LZ77 + Huffman) compressor, which on flat and repetitive data behaves much like run-length encoding. The real trick is that each scanline can first be preprocessed with a handful of predefined filters to make it more compressible. For example, one filter option replaces each pixel with its difference from its neighbor before compression, which lets you encode horizontal gradients very efficiently. Other filters are more sophisticated, like the averaging filter or the Paeth filter [1], but the overall idea is basically the same. As a result, PNG tends to work well with flat images, linear gradients, dithering, and other simple repeating patterns. It tends to work badly for images with a lot of higher-order regions, like cubic or higher-degree surface patches.

PNG-type compression *might* be a good idea for a voxel engine, but I am still not convinced that it can beat the B-rep barrier. The reason is that at the end of the day you still have to store the start/stop of each run, which means you are basically storing the boundary of the object anyway. Lossy formats like JPEG could conceivably do better by making lossy trade-offs, but I am not sure what the overall cost would be in the end, or whether it would even look `good enough' for a real game engine.

[1]: A. W. Paeth, "Image File Compression Made Easy," Graphics Gems II.
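
For reference, the Paeth predictor itself is tiny. This is a sketch of the standard predictor as described in the PNG specification (not anybody's engine code); each byte is predicted from its left, above, and upper-left neighbors, and only the prediction error gets passed to the compressor:

#include <cstdlib>

// a = left neighbor, b = above, c = upper-left
int paeth_predict(int a, int b, int c) {
    int p  = a + b - c;                   // initial gradient estimate
    int pa = std::abs(p - a);
    int pb = std::abs(p - b);
    int pc = std::abs(p - c);
    if (pa <= pb && pa <= pc) return a;   // left neighbor is closest
    if (pb <= pc)             return b;   // then above
    return c;                             // then upper-left
}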


#4795528 How good are the best voxel compression algorithms?

Posted by robotichrist on 07 April 2011 - 08:48 AM

This is basically an informal poll, but it is based on a comment I made in the unlimited detail thread. The main motivation for asking should be self-evident: the ultimate bottleneck in any voxel-based system is memory. Better compression means higher detail, larger environments, and ultimately better performance (due to fewer cache misses and less data to move around). For distributed voxel engines, memory is also the key limiting factor in the amount of bandwidth used.

This suggests that the biggest and most important problem any new voxel engine needs to solve is *not* how to max out rendering performance, but rather how to efficiently compress all that volumetric data. In fact, somewhat counter-intuitively, I would suggest that highly compressed voxel data may even be *faster* to render than uncompressed data. This was certainly true back in the old days of sprite-based rendering, where run-length encoded sprites could be blitted to the screen much more quickly than with a brute-force bit-by-bit memcpy. The reason is that modern CPUs and GPUs are primarily bandwidth limited, so the trade-off between memory and computation is becoming increasingly lopsided in favor of the latter.

One common technique for encoding voxel data is to use either run-length encoding or octrees. The underlying assumption in each case is that the voxel data is mostly piecewise constant, so you only need to keep track of the places where it transitions between these regions. The main reason this works is that the boundary of an object is much smaller than the object itself (typically on the order of O(n^{(d-1)/d}) in d dimensions). However, if you take this idea to its ultimate conclusion, the best you can reasonably hope for is to compress the voxel data down to just the information encoded on the boundary; in other words, to reduce it to a B-rep. This is slightly annoying, since it seems to just take you back to polygon/B-rep objects again :(
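
As a small illustration of the run-length idea (my own sketch, not from any particular engine): encoding one scanline of voxels as (value, length) pairs makes the storage proportional to the number of transitions, i.e. to the boundary, rather than to the raw voxel count.

#include <cstdint>
#include <utility>
#include <vector>

// Collapse piecewise-constant spans of one scanline into (value, run length)
// pairs. A mostly-uniform scanline of n voxels becomes only a handful of runs.
std::vector<std::pair<std::uint8_t, std::uint32_t>>
rle_encode(const std::vector<std::uint8_t>& scanline) {
    std::vector<std::pair<std::uint8_t, std::uint32_t>> runs;
    for (std::uint8_t v : scanline) {
        if (!runs.empty() && runs.back().first == v)
            ++runs.back().second;        // extend the current run
        else
            runs.push_back({v, 1});      // new run starts at a transition
    }
    return runs;
}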

So, I am asking this question to try to figure out whether it is actually possible to do much better than this. There are a bunch of `crank' videos on YouTube with people claiming to do so (e.g. Unlimited Detail and Atomengine), but there are many reasons to be skeptical about their claims (such as a lack of independent verification, and questionable motivations on their part to attract investors). Are there any credible research results that honestly investigate this problem? The work by Sven Forstmann (spacerat on these forums) comes to mind, but I don't think he has written anything lately. There are also sparse voxel octrees, but I am not convinced that a simple octree compression scheme will get you anything better than what can already be done with B-reps (see the above comment).

What are your thoughts on this issue?


#4790001 Friction impulse. Need formula.

Posted by robotichrist on 24 March 2011 - 10:28 AM

Hello. For the normal impulse I used Chris Hecker's papers. All works fine, but the formula doesn't take friction into account. I believe there's an equivalent for friction (not just tangent velocity * coefficient of friction), but I couldn't google it.


Friction is complicated. In the naive sense, rigid-body dynamics with a Coulomb friction force (which is what you are describing) is not even guaranteed to have a solution. The classic, ancient example of this is Painlevé's paradox, which remained unresolved until relatively recently.

As a result, if you want to do friction using the classical Coulomb approximation, you are in for a very rude awakening (as I am sure you have just discovered!). A better-posed way to formulate friction is the principle of maximum dissipation. This basically says that the friction force will try to reduce the relative velocity (i.e. kinetic energy) in the tangent direction at the contact as much as possible, subject to a constraint on the magnitude of the friction force (at most the coefficient of friction times the normal force). This constraint set forms a cone, so it is commonly called the "friction cone". Many LCP solvers use a piecewise-linear approximation of the friction cone to ensure the constraint is not violated. For a more precise and careful discussion of these issues, there are some really great papers by David E. Stewart. Here is a good one to start with (in my opinion):


D. E. Stewart, "Rigid-Body Dynamics with Friction and Impact," SIAM Review (2000).
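
For what the original question actually asked, here is a minimal 2D sketch of the usual game-engine shortcut rather than Stewart's LCP formulation: after the normal impulse has been applied, compute the tangential impulse that would cancel sliding and clamp it to the Coulomb cone |jt| <= mu * jn. All the names and the non-rotational simplification here are my own illustrative assumptions, not Hecker's notation.

#include <algorithm>
#include <cmath>

struct Vec2 { double x, y; };
static double dot(Vec2 a, Vec2 b) { return a.x*b.x + a.y*b.y; }

double friction_impulse(Vec2 relVel,       // velocity of B relative to A at the contact
                        Vec2 normal,       // unit contact normal
                        double invMassSum, // 1/mA + 1/mB (ignoring angular terms)
                        double jn,         // previously computed normal impulse
                        double mu)         // coefficient of friction
{
    Vec2 tangent { -normal.y, normal.x };           // unit tangent in 2D
    double vt = dot(relVel, tangent);               // tangential relative speed
    double jt = -vt / invMassSum;                   // impulse that would stop sliding
    double maxJt = mu * std::fabs(jn);              // Coulomb cone limit
    return std::clamp(jt, -maxJt, maxJt);           // clamped tangential impulse
}

The returned scalar is applied along the tangent to both bodies, just like the normal impulse.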


#4782915 Fourier Collision Detection

Posted by robotichrist on 07 March 2011 - 12:44 PM

Hello everyone,

I know that collision detection is a frequent topic of discussion here, so I figured this might be a good forum to present an early version of some research I have been working on and hopefully get some early feedback. The basic idea is to rewrite the collision test as a convolution, then use a low-pass filter to discretize the geometry of the solids. These convolutions can then be evaluated and differentiated exactly using a finite Fourier summation, which leads to a new collision test / Jacobian calculation that scales with the accuracy of the solids (i.e. the running time and the accuracy of the test both scale with the number of coefficients). I believe this technique has the potential to be much simpler than more conventional contact/geometry based collision detection schemes and could lead to a new class of solvers for approximate collision detection/rigid body dynamics. Here is a link to the early draft of the paper:


http://sal-cnc.me.wisc.edu/index.php?option=com_remository&Itemid=143&func=fileinfo&id=191


This is still an early version, but I'd definitely appreciate any and all comments. :) I am also curious to know what those who have more experience with developing rigid body dynamics engines may think about this approach, and what their views are on the general problem.
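
To give a flavour of the idea, here is a 1D toy sketch (my own illustration, not the formulation in the paper): if F[k] and G[k] are truncated Fourier coefficients of the two low-pass filtered indicator functions, then an overlap measure at relative offset t reduces to a finite sum that can be evaluated, and differentiated in t, exactly.

#include <complex>
#include <vector>

// overlap(t) ~ sum_k F[k] * conj(G[k]) * e^{i k t}
// A positive value indicates overlapping "mass"; its derivative in t gives
// the direction that reduces penetration.
double overlap(const std::vector<std::complex<double>>& F,
               const std::vector<std::complex<double>>& G,
               double t)
{
    std::complex<double> sum(0.0, 0.0);
    const int n = static_cast<int>(F.size());       // assumes F and G have equal length
    for (int k = 0; k < n; ++k)
        sum += F[k] * std::conj(G[k]) * std::polar(1.0, k * t);
    return sum.real();
}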

