robotichrist

Member Since 23 Apr 2006
Offline Last Active Sep 07 2011 04:54 PM

Topics I've Started

What features do you want in a procedural mesh library?

17 August 2011 - 01:10 AM

Hi everyone!

In my spare time, I've started working on a C++0x library for triangulated meshes built on top of the wonderful Eigen library: http://eigen.tuxfami...itle=Main_Page. It is in the *very* early stages right now, but it already has a few nifty/useful features including:

  • Templated data structures and algorithms for custom vertex formats
  • O(1) topological operations (neighborhood queries, Euler operators, vertex/face insertion/deletion)
  • Flat array-based memory layout for direct OpenGL/DirectX interoperability (no copying or traversal needed; see the sketch after this list)
  • PLY serialization (with more file formats to come later)
  • Multiple algorithms, including:
      • Isosurface extraction
      • Connected components labeling
      • Mesh repair
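
To give a feel for the flat-array layout (a minimal sketch with hypothetical names, not the library's actual API): vertex and index data live in contiguous arrays, so the raw pointers can be handed to OpenGL as-is.

// Hypothetical sketch of the flat-array idea -- not TriMesh0x's actual API.
// Vertices and faces live in contiguous std::vectors, so the raw pointers
// can be passed straight to the GPU without any copying or traversal.
#include <cstdint>
#include <vector>

template <typename Vertex>          // Vertex is any POD struct (position, normal, uv, ...)
struct FlatMesh {
    std::vector<Vertex>   vertices; // one contiguous, GPU-ready block
    std::vector<uint32_t> indices;  // three indices per triangle

    const Vertex*   vertex_data() const { return vertices.data(); }
    const uint32_t* index_data()  const { return indices.data(); }
};

// Usage with OpenGL (error handling omitted):
//   glBufferData(GL_ARRAY_BUFFER,
//                mesh.vertices.size() * sizeof(Vertex),
//                mesh.vertex_data(), GL_STATIC_DRAW);
//   glBufferData(GL_ELEMENT_ARRAY_BUFFER,
//                mesh.indices.size() * sizeof(uint32_t),
//                mesh.index_data(), GL_STATIC_DRAW);
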
The ultimate goal is to create a replacement for the trimesh library that brings in some of the features of trimesh2 (hopefully with less cruft and better performance, by exploiting some of the new features in C++0x) and adds new features focused on solid modeling (i.e. CSG, point membership tests, etc.). Right now I am working on code that uses BSP trees for point membership classification, interference testing, and surface trimming; a toy sketch of the point membership test appears after the list below. If things go well, this could eventually turn into something like a lightweight, drop-in CAD kernel that can be easily embedded into existing C++ applications. Here is a list of planned features:

  • Solid modeling operations (CSG, point membership, Minkowski sums, etc.)
  • Subdivision surfaces
  • Mesh simplification
  • ...?
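
As a rough illustration of the BSP-based point membership test mentioned above (a toy sketch with made-up types, not the actual implementation), classification is just one plane test per level of the tree:

// Toy sketch of point membership classification with a BSP tree.
// Made-up structures for illustration -- not the library's real code.
#include <memory>

struct Plane { double a, b, c, d; };  // plane equation: a*x + b*y + c*z + d = 0

struct BSPNode {
    Plane plane;
    std::unique_ptr<BSPNode> front, back;          // null child = leaf region
    bool front_solid = false, back_solid = false;  // leaf labels: solid or empty
};

// Walk down the tree, testing the point against one plane per level.
bool point_inside(const BSPNode* node, double x, double y, double z) {
    while (node) {
        double s = node->plane.a * x + node->plane.b * y
                 + node->plane.c * z + node->plane.d;
        if (s >= 0.0) {
            if (!node->front) return node->front_solid;
            node = node->front.get();
        } else {
            if (!node->back) return node->back_solid;
            node = node->back.get();
        }
    }
    return false;
}
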
My main focus right now is on developing tools that are useful for procedural generation and mesh analysis. It is true that the sheer complexity of even very simple solid modeling tasks on meshes (for example, checking point membership) tends to scare people away from using them for solid modeling. Still, there are plenty of good technical reasons to prefer meshes over more naive volumetric representations (i.e. regular voxel grids or CSG trees). Meshes are much easier to render, they take up a fraction of the space of octrees/voxel grids, and they do not suffer from aliasing due to undersampling. Moreover, many useful operations, like CSG, are much faster on meshes (though they are also far more complicated to implement). Hopefully, a convenient library can address this last point as well!

In case anyone wants to check it out or try contributing to the code, here is the github repository:

https://github.com/m...senko/TriMesh0x

When will the image of the day go back online?

18 April 2011 - 11:05 PM

Don't mean to nag or anything, but it was always my favorite feature on the site. Are there any plans to resurrect it in the near future? I really do miss seeing all the cool pictures of everyone's projects on the front page.

How good are the best voxel compression algorithms?

07 April 2011 - 08:48 AM

This is basically an informal poll, but it is based on a comment that I made in the Unlimited Detail thread. The motivation for asking should be self-evident: the ultimate bottleneck in any voxel-based system is memory. Better compression means higher detail, larger environments, and ultimately better performance (due to fewer cache misses and less data to move around). For distributed voxel engines, memory is also the key factor limiting bandwidth.

This suggests that the biggest and most important problem any new voxel engine needs to solve is *not* how to max out rendering performance, but how to efficiently compress all that volumetric data. In fact, somewhat counterintuitively, I would suggest that highly compressed voxel data may even be *faster* to render than uncompressed data. This was certainly true back in the old days of sprite-based rendering, where run-length encoded sprites could be blitted to the screen much more quickly than with a brute-force pixel-by-pixel copy (a toy example follows this paragraph). The reason is that modern CPUs and GPUs are primarily bandwidth-limited, so the trade-off between memory and computation is becoming increasingly lopsided in favor of the latter.
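
To illustrate the sprite analogy (made-up structures, purely for exposition): with run-length encoding, transparent pixels are skipped outright and each opaque run becomes one bulk copy, so the cost scales with the number of runs rather than the number of pixels.

// Toy run-length encoded sprite blit -- illustrative only.
// Transparent runs are skipped entirely and each opaque run is copied in
// one memcpy, so the work is proportional to the number of runs.
#include <cstdint>
#include <cstring>
#include <vector>

struct Run {
    uint16_t skip;   // transparent pixels to skip over
    uint16_t count;  // opaque pixels that follow in 'pixels'
};

struct RLESprite {
    std::vector<Run>      runs;    // runs for one scanline, in order
    std::vector<uint32_t> pixels;  // opaque pixel data, packed run after run
};

void blit_scanline(const RLESprite& s, uint32_t* dst) {
    const uint32_t* src = s.pixels.data();
    for (const Run& r : s.runs) {
        dst += r.skip;                                      // skip transparency
        std::memcpy(dst, src, r.count * sizeof(uint32_t));  // copy whole run
        dst += r.count;
        src += r.count;
    }
}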

One common technique for encoding voxel data is to use run-length encoding or octrees. The underlying assumption in both cases is that the voxel data is mostly piecewise constant, so you only need to keep track of the places where it transitions between regions (a toy octree sketch follows this paragraph). The reason this works is that the boundary of an object is much smaller than the object itself (typically on the order of O(n^{(d-1)/d}) in d dimensions). However, if you take this idea to its logical conclusion, the best you can reasonably hope for is to compress the voxel data down to just the information encoded on the boundary; in other words, to reduce it to a B-rep. This is slightly annoying, since it seems to just take you back to polygon/B-rep objects again :(
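
As a concrete toy version of the octree assumption (illustrative only, not any particular engine's format): a node whose eight children all carry the same value collapses into a single leaf, so after compression the surviving nodes cluster around region boundaries.

// Toy octree with uniform-region collapsing -- illustrative only.
// Interior nodes are assumed to have all eight children allocated.
#include <array>
#include <cstdint>
#include <memory>

struct OctreeNode {
    bool    is_leaf = false;
    uint8_t value   = 0;                               // material id, if leaf
    std::array<std::unique_ptr<OctreeNode>, 8> child;  // used if interior

    // Recursively collapse interior nodes whose children are identical leaves.
    void compress() {
        if (is_leaf) return;
        bool uniform = true;
        for (auto& c : child) {
            c->compress();
            uniform &= c->is_leaf && c->value == child[0]->value;
        }
        if (uniform) {
            is_leaf = true;
            value = child[0]->value;
            for (auto& c : child) c.reset();  // free the now-redundant subtree
        }
    }
};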

So, I am asking this question to try to figure out whether it is actually possible to do much better than this. There are a bunch of 'crank' videos on YouTube with people claiming to do so (e.g. Unlimited Detail and Atomengine), but there are many reasons to be skeptical of their claims (such as the lack of independent verification, and their questionable motivation to attract investors). Are there any credible research results that honestly investigate this problem? The work by Sven Forstmann (spacerat on these forums) comes to mind, but I don't think he has written anything lately. There are also sparse voxel octrees, but I am not convinced that a simple octree compression scheme gets you anything better than what can already be done with B-reps (see the comment above).

What are your thoughts on this issue?

State of the art in network physics?

22 March 2011 - 03:44 PM

Not sure whether to post this here or in the multiplayer forum, but I am curious about the current state of networked physics for video games. There is of course Glenn Fiedler's famous blog:

http://gafferongames.com/


But beyond that, I am having trouble finding much of anything besides the odd forum post. So, does anyone here have papers or references on this subject?

Fourier Collision Detection

07 March 2011 - 12:44 PM

Hello everyone,

I know that collision detection is a frequent topic of discussion here, so I figured this might be a good forum to present an early version of some research I have been working on, and hopefully get some early feedback. The basic idea is to rewrite the collision test as a convolution, then use a low-pass filter to discretize the geometry of the solids. These convolutions can be evaluated and differentiated exactly using a finite Fourier summation, which leads to a new collision test / Jacobian calculation whose cost scales with the accuracy of the solids (i.e. the number of coefficients trades running time against the accuracy of the test). I believe this technique has the potential to be much simpler than conventional contact/geometry-based collision detection schemes, and could lead to a new class of solvers for approximate collision detection and rigid body dynamics. A toy 1D version of the idea is sketched after the link below. Here is a link to the early draft of the paper:


http://sal-cnc.me.wisc.edu/index.php?option=com_remository&Itemid=143&func=fileinfo&id=191
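
To make the convolution idea concrete, here is a toy 1D version under my reading of the draft (all names are hypothetical; see the paper for the real formulation). For two 2*pi-periodic indicator functions with Fourier coefficients a_k and b_k, the overlap integral at offset t works out to 2*pi * sum_k a_k * conj(b_k) * e^{ikt}; truncating the sum to |k| <= N gives the band-limited collision test, and differentiating for the Jacobian just multiplies each term by ik.

// Toy 1D band-limited overlap test -- my reading of the idea, not the
// paper's actual formulation.  Coefficient vectors hold indices k = -N..N,
// so a[k + N] is the k-th Fourier coefficient of the first solid.
#include <complex>
#include <vector>

using cplx = std::complex<double>;

double overlap(const std::vector<cplx>& a, const std::vector<cplx>& b, double t) {
    const int N = static_cast<int>(a.size() / 2);
    cplx sum = 0.0;
    for (int k = -N; k <= N; ++k)
        sum += a[k + N] * std::conj(b[k + N]) * std::exp(cplx(0.0, k * t));
    return 2.0 * 3.141592653589793 * sum.real();  // real for real-valued solids
}

// The solids (approximately) intersect when the smoothed overlap volume
// is positive.
bool collides(const std::vector<cplx>& a, const std::vector<cplx>& b,
              double t, double eps = 1e-9) {
    return overlap(a, b, t) > eps;
}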


This is still an early version, but I'd definitely appreciate any and all comments. :) I am also curious what those with more experience developing rigid body dynamics engines think about this approach, and what their views are on the general problem.
