About robotichrist

  1. Hi everyone! In my spare time, I've started working on a C++0x library for triangulated meshes built on top of the wonderful Eigen library: [url=""]http://eigen.tuxfami...itle=Main_Page.[/url] It is in the *very* early stages right now, but it already has a few nifty/useful features, including:

[list][*]Templated data structures and algorithms for custom vertex formats[*]O(1) topological operations (neighborhood queries, Euler operators, vertex/face insertion/deletion)[*]Flat array-based memory layout for direct OpenGL/DirectX interoperability (no copy/traversal needed)[*]PLY serialization (with more file types to come later)[*]Multiple algorithms, including isosurface extraction, connected components labeling, and mesh repair[/list]

The ultimate goal is to create a replacement for the trimesh library that brings in some of the features from trimesh2 (hopefully with less cruft and better performance, by exploiting some of the new features in C++0x) and adds some new features focused on solid modeling (i.e. CSG, point membership tests, etc.). Right now I am working on adding code that uses BSP trees to do point membership classification, interference testing, and surface trimming. If things go well, it could eventually turn into something like a lightweight, drop-in CAD kernel that can be easily embedded into existing C++ applications. Here is a list of planned features:

[list][*]Solid modeling operations (CSG, point membership, Minkowski sums, etc.)[*]Subdivision surfaces[*]Mesh simplification[*]...?[/list]

My main focus right now is on developing tools that are useful for procedural generation and mesh analysis.
While it is true that the sheer complexity of doing even very simple solid modeling tasks (for example, checking point membership) with meshes tends to scare people away from them, there are still plenty of good technical reasons to prefer meshes over more naive volumetric representations (i.e. regular voxel grids or CSG trees). Meshes are of course much easier to render; they also take up a fraction of the space of octrees/voxel grids and do not suffer from aliasing due to undersampling. Moreover, many useful operations, like CSG, are much faster on meshes (though they are also far more complicated to implement). Hopefully, a convenient library can address this last point as well! In case anyone wants to check it out or try contributing to the code, here is the github repository: [url=""][/url]
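As a purely illustrative sketch of the flat array layout described above (this is not the library's actual interface, and the names `TriMesh`, `MyVertex`, etc. are mine), a templated mesh might store vertices and triangle indices in contiguous arrays that can be handed straight to glBufferData() without any traversal:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical flat-array mesh: vertices and faces live in contiguous
// arrays, so the raw pointers can be uploaded to OpenGL/DirectX directly.
template <typename Vertex>
struct TriMesh {
    std::vector<Vertex> vertices;   // one entry per vertex, custom format
    std::vector<int>    faces;      // 3 consecutive indices per triangle

    const Vertex* vertexData() const { return vertices.data(); }
    const int*    indexData()  const { return faces.data(); }
    size_t triangleCount() const { return faces.size() / 3; }
};

// Example custom vertex format: position + normal, tightly packed.
struct MyVertex { float x, y, z; float nx, ny, nz; };
```

The point of the layout is that `vertexData()` and `indexData()` are exactly the buffers a vertex array / index buffer upload wants, with no intermediate copy.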
  2. Bounding Sphere construction

    Finding a minimal bounding sphere is conceptually easy. There is a trivial brute-force algorithm: enumerate all sets of four points on the convex hull of the object, take the smallest enclosing sphere of each such tetrahedron, and then take the smallest candidate that includes all the points (that is O(n^4) candidate sets, each with an O(n) containment check). There are also the special cases where the bounding sphere is determined by only two or three points, but they are handled similarly. There are several much faster, even linear time, algorithms; you can look those up on Wikipedia: [url=""]http://en.wikipedia...._circle_problem[/url]
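To make the brute force concrete, here is a sketch of the 2D analog (the smallest-circle problem from the Wikipedia link): candidate circles come from every pair (as a diameter) and every triple (as a circumcircle), and we keep the smallest candidate that encloses all the points. All the names here are my own, not from any particular library:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };
struct Circle { Pt c; double r; };

static bool contains(const Circle& s, const Pt& p) {
    double dx = p.x - s.c.x, dy = p.y - s.c.y;
    return std::sqrt(dx*dx + dy*dy) <= s.r + 1e-9;
}

static bool containsAll(const Circle& s, const std::vector<Pt>& pts) {
    for (size_t i = 0; i < pts.size(); ++i)
        if (!contains(s, pts[i])) return false;
    return true;
}

// Circle with the two points as diameter endpoints.
static Circle fromTwo(const Pt& a, const Pt& b) {
    Pt c = { (a.x + b.x) / 2, (a.y + b.y) / 2 };
    double dx = a.x - c.x, dy = a.y - c.y;
    return { c, std::sqrt(dx*dx + dy*dy) };
}

// Circumcircle of three points; degenerate (collinear) triples
// return a huge circle so they never win.
static Circle fromThree(const Pt& a, const Pt& b, const Pt& c) {
    double d = 2 * (a.x*(b.y - c.y) + b.x*(c.y - a.y) + c.x*(a.y - b.y));
    if (std::fabs(d) < 1e-12) return { {0, 0}, 1e300 };
    double a2 = a.x*a.x + a.y*a.y, b2 = b.x*b.x + b.y*b.y, c2 = c.x*c.x + c.y*c.y;
    Pt u = { (a2*(b.y - c.y) + b2*(c.y - a.y) + c2*(a.y - b.y)) / d,
             (a2*(c.x - b.x) + b2*(a.x - c.x) + c2*(b.x - a.x)) / d };
    double dx = a.x - u.x, dy = a.y - u.y;
    return { u, std::sqrt(dx*dx + dy*dy) };
}

// Brute force: O(n^3) candidates, each checked against all n points.
Circle bruteForceMinCircle(const std::vector<Pt>& pts) {
    Circle best = { {0, 0}, 1e300 };
    for (size_t i = 0; i < pts.size(); ++i)
        for (size_t j = i + 1; j < pts.size(); ++j) {
            Circle s = fromTwo(pts[i], pts[j]);
            if (s.r < best.r && containsAll(s, pts)) best = s;
            for (size_t k = j + 1; k < pts.size(); ++k) {
                Circle t = fromThree(pts[i], pts[j], pts[k]);
                if (t.r < best.r && containsAll(t, pts)) best = t;
            }
        }
    return best;
}
```

The 3D version is the same idea with tetrahedra and circumspheres; only `fromThree` gets replaced by the (messier) four-point circumsphere solve.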
  3. Voxel / Marching Cubes

    Here is a really dead simple/bone-headed algorithm that I've used to do level-set surface reconstruction. Unlike marching cubes/tets/whatever, it has no special cases, though the trade-off is that it can end up being a bit slower. This is how it works:

1. First, generate an initial coarse mesh that covers the level set. This can be as simple as covering it with uniform boxes, gluing them together along coplanar faces, and throwing out the ones which do not cross the boundary.
2. Next, take your coarse mesh and perturb the vertices so that they lie on the level set boundary. To do this, you just use a non-linear root finding method, such as Newton's method/gradient descent.
3. Take the resulting mesh from step 2 and apply a surface subdivision algorithm.
4. Go to step 2 and repeat until your mesh is reasonably fine.

Of course, where this can get into trouble is if your initial guess/mesh was too coarse (i.e. you cover up a hole or something). So there are probably some ways to refine this to make it more efficient and robust. However, the general idea is to treat the problem like any ordinary non-linear root finding problem: do gradient descent and iterate until it converges.

EDIT: -2? Really? I never claimed this method was particularly good or even that original (I am sure that someone has tried doing this before...), but it is simple and it works. Could someone please explain what their issue is with this post? I would be happy to stand corrected if there is something wrong with what I wrote.
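The vertex-perturbation step above is just root finding along the gradient. As a minimal sketch, using the unit sphere f(p) = |p|^2 - 1 as a stand-in level set, the damped Newton update p <- p - f(p) grad f(p) / |grad f(p)|^2 looks like:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Example level set: unit sphere, f(p) = |p|^2 - 1.
static double f(const Vec3& p) { return p.x*p.x + p.y*p.y + p.z*p.z - 1.0; }
static Vec3 gradf(const Vec3& p) { Vec3 g = { 2*p.x, 2*p.y, 2*p.z }; return g; }

// Pull a vertex onto the zero set f(p) = 0 with Newton steps
// along the gradient: p <- p - f(p) * grad f(p) / |grad f(p)|^2.
Vec3 projectToLevelSet(Vec3 p, int maxIter = 50, double tol = 1e-10) {
    for (int i = 0; i < maxIter && std::fabs(f(p)) > tol; ++i) {
        Vec3 g = gradf(p);
        double g2 = g.x*g.x + g.y*g.y + g.z*g.z;
        if (g2 < 1e-20) break;   // stuck at a critical point of f
        double s = f(p) / g2;
        p.x -= s * g.x; p.y -= s * g.y; p.z -= s * g.z;
    }
    return p;
}
```

In the actual algorithm you would run this on every vertex after each subdivision pass, with f being whatever implicit function defines your surface.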
  4. Good 3D Lib for Java browser game/Basic

    [quote name='soaren' timestamp='1306347041' post='4815697'] Okay, I need some advice here. I am intermediate-expert in Java. I'm going to use Java for the game. I have programmed some simple singleplayer 2d rpgs (and platformers). I'm looking to jump into basic 3d multiplayer game that runs in a browser. What I need: A *3D* graphics library that is low(er) level that will run on any browser (okay, maybe not IE), and it capable of graphicsthatarenicerthatrunescape lol. I was thinking OpenGL, but there are very few tutorials on using opengl in a browser (honestly, I'm getting the hang of OpenGL, and would prefer this, but I have no idea how to get it in a browser), I looked into WebGL, but the amount of documentation on using it with java is absolutely PATHETIC. Which one should I use? (Keeping loading times down is a plus). lwjgl looks alright... But there's barely any documentation whatsoever, and i doubt it can be used in a browser. JOGL seems to be kinda what I'm aiming for, but again, next to no documentation whatsoever. And it appears Java3d has been dead for years =\ Any other ideas? Also, what language should I use for client-server communication? PHP? I honestly know next to nothing on this topic. The only real requirement I have for the client-server is that it is easily secured (or rather, EASIER to secure). Also if you know of any good tutorials on *3D* ENGINE/GAME framework design (specifically in Java) that would be awesome. There are SOME tutorials on this in 2D, but I'm looking for some 3D tutorials. thanks Devlin [/quote] What makes you think the WebGL documentation is bad? The official spec by Khronos is very well written: [url=""]https://www.khronos....ebgl/specs/1.0/[/url] If you know OpenGL, WebGL is pretty much the exact same thing. I don't see how you could make the case that either JOGL or LWJGL is easier, since you would need to learn at least as much to use either of them.
For your server, you could write that in pretty much anything you want. All it takes is some code to process HTTP headers, which is really easy (or you could just find a library that does it for you). Depending on your requirements, you may want to use either a static web server (if you don't have any dynamic content), a framework (if you only have simple forms/stateless transactions), or roll your own HTTP server (if you need websockets/comet-style message passing for low latency updates to some complex internal state). EDIT: Now if your question is how to use it with Java, then I am afraid you are terribly confused. The idea behind WebGL is to have an OpenGL interface for Java[b]script[/b] -- not Java. I don't think it would even be possible, without an enormous (and probably wasted) effort, to connect WebGL to Java.
  5. functional programming languages

    I don't want to start a language war, but even though I really do like functional programming, it is hard to make it work in any application that needs to push around big blobs of semi-structured binary data (such as games, numerical code, or general graphics/physics/image processing types of stuff). The reason it ends up being so difficult is that you implicitly need to make lots of copies of your data to get things to work without side effects. When the things you are copying are small and don't persist for long, this is no big deal. However, with big vectors and matrices it becomes a very expensive proposition. Even if you are careful and use things like monads to abstract some of this away, it is still very easy to make a mistake and accidentally put a giant memcpy right in the middle of your main loop (and some languages will do this silently, with different behavior depending on what compiler options you use!). Some languages (like Erlang) try to get around this by replacing random access arrays with binary trees. This has the advantage that local updates can be made without a complete copy, but the disadvantage is that random accesses are much slower, and the data structures tend to exhibit poor locality/cache performance, often ending up orders of magnitude slower in practice. This is really a shame, since I like the idea of using functional programming for things like organizing game logic and entity behavior, and I think that functional code is usually more elegant and readable than imperative code. However, the awkwardness of using FP for graphics is just too big a downside to make it practical. One way I could see it working out is if you use a functional backend for, say, a server (such as in an MMO browser-based game). In this situation, the game logic could be sufficiently abstracted away from your display that it might be feasible to use something like Erlang or Haskell.
Of course if it turns out that later on you do need to do physics on the server-side, then you will probably end up getting screwed.
  6. [quote name='zacaj' timestamp='1306268454' post='4815286'] You could try using Kriging ([url=""][/url]) on the black holes, and adding other random control points for the field. Ive never actually used it, but it seems like it would work... [/quote] That's a neat trick!
  7. Physics Estimation Question

    It is an interesting idea, and I think there is not a lot of good literature on this subject yet. There are a few things you can fake using these types of methods, but it is more an art than a science. For example, if you want it to look like rigid body dynamics/inertia is working correctly, it is often enough just to impart a constant angular velocity to a rigid body and then let it spin freely (even though this may not be the correct motion as described by the Euler equations, whose free-body solution involves Jacobi elliptic functions). Collisions and interactions are generally harder to fake, but you may be interested in reading the following paper: SW Hsu, J Keyser. "Statistical simulation of rigid bodies," Eurographics/SIGGRAPH Symposium on Computer Animation (2009). [url=""]PDF[/url] Another neat idea that I've seen used successfully is curl noise for fluid/flow field simulation. There was a neat paper at SIGGRAPH 2007 which describes the basics of the technique: R Bridson, J Hourihan, M Nordenstam. "Curl-noise for procedural fluid flow," SIGGRAPH (2007). [url=""]PDF[/url] A slight refinement of this method is divergence-free noise, which achieves similar looking results: I DeWolf, "Divergence-free noise," Technical Report. [url=""]PDF[/url] In all these methods, the general idea seems to be that you look for some invariant or statistical properties you want the physics to satisfy (which need not be strict), and then, instead of trying to solve the system exactly, you just pick a random solution that satisfies those constraints. Of course, this is probably not going to work too well with rigid body dynamics, since the entire work of solving such a system is wrapped up in the constraints; but it may work better for faking fractures, particles, or complex turbulent phenomena which are chaotically related to their initial conditions. And of course, none of these things do exactly what you are asking, but maybe they can help inspire you to find a workable solution.
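The curl noise idea can be sketched in a few lines: take any smooth vector potential psi (the actual papers build it from Perlin-style noise; a cheap trigonometric stand-in is used here), define the velocity as v = curl psi, and the field is automatically divergence free, which we can check numerically:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Stand-in "noise" potential. In practice each component would be a
// Perlin/simplex noise function; any smooth psi gives div(curl psi) = 0.
static Vec3 psi(double x, double y, double z) {
    Vec3 p = { std::sin(y * 1.3 + z),
               std::cos(x + z * 0.7),
               std::sin(x * 0.9 + y) };
    return p;
}

// Velocity field v = curl psi, via central finite differences.
Vec3 curlNoise(double x, double y, double z, double h = 1e-4) {
    Vec3 v;
    v.x = (psi(x, y + h, z).z - psi(x, y - h, z).z) / (2*h)
        - (psi(x, y, z + h).y - psi(x, y, z - h).y) / (2*h);
    v.y = (psi(x, y, z + h).x - psi(x, y, z - h).x) / (2*h)
        - (psi(x + h, y, z).z - psi(x - h, y, z).z) / (2*h);
    v.z = (psi(x + h, y, z).y - psi(x - h, y, z).y) / (2*h)
        - (psi(x, y + h, z).x - psi(x, y - h, z).x) / (2*h);
    return v;
}

// Numerical divergence of v; should vanish up to truncation error,
// which is exactly the incompressibility invariant being faked.
double divergence(double x, double y, double z, double h = 1e-3) {
    return (curlNoise(x + h, y, z).x - curlNoise(x - h, y, z).x) / (2*h)
         + (curlNoise(x, y + h, z).y - curlNoise(x, y - h, z).y) / (2*h)
         + (curlNoise(x, y, z + h).z - curlNoise(x, y, z - h).z) / (2*h);
}
```

This illustrates the "pick a random field satisfying the invariant" pattern: the noise is arbitrary, but incompressibility holds by construction.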
  8. Computer science or game programming?

    [quote name='Domx' timestamp='1301572382' post='4792546'] ... especially in a field such as CS, which is constantly evolving. [/quote] Sorry, I don't mean to pick on you, but I would like to take a moment to point out that while I agree with your overall conclusion, I strongly disagree with you on this point. The fundamentals of CS are not really evolving much today. Basic algorithms, data structures, analysis, and computer architecture have not changed in the past 30 years, nor does it seem likely that they will go through any radical updates in the foreseeable future. Of course the cutting edge of CS is rapidly advancing, but this is not the sort of stuff that you will learn in a typical undergrad CS curriculum, nor is it even that important (no offense to those involved in research) to 99% of the general programming population out there. This is exactly why a CS degree is so valuable: these foundational concepts are not likely to change, and they are useful in a wide array of situations.
  9. Thesis In Physics

    [quote name='karsnen' timestamp='1305224294' post='4809880'] @PREDATOR_UK Thank you. I am searching for ref materials right now. It helped as a start from me:). [/quote] Is this for an undergraduate thesis? You could start with Brian Mirtich's PhD thesis: [url=""][/url], or check any of the references listed in the thread stickied at the top of the forum.
  10. SQT Quaternions

    [quote name='alvaro' timestamp='1303503314' post='4801728'] You would only pay the sqrt when you are setting up the transformation. The problem I see with the suggestion of incorporating the scale into the quaternion is that now you can't renormalize the quaternion after applying a rotation. I would get rid of the scale for different reasons: Perhaps people with more practical experience can let us know otherwise, but I don't think the scale functionality is all that useful. Most objects involved in a game rotate and translate all the time, but they usually don't shrink. [/quote] That may be. I would be inclined to agree with you that for rigid objects, having an extra scale degree of freedom is an unnecessary liability. Of course, disregarding any of these externalities, if you *really* want a scale degree of freedom (for whatever reason), then you are probably better off lumping it in with the quaternion, since it will be more efficient.
  11. SQT Quaternions

    [quote name='Dirk Gregorius' timestamp='1303486216' post='4801636'] Because you probably don't want the sqrt() for every transformation [/quote] No, you are misunderstanding me. What I am saying is that instead of storing a separate float for the scale, you precalculate that guy and store it inside the quaternion. That way you don't have to do the extra 3 multiplies per vector. The new class would work like this:

struct SQTTransform {
    Vec3 translation;
    Quat q;   // denormalized: dot(q, q) carries the scale

    float get_scale() const { return dot(q, q); }
    // Divide out the old scale first, so repeated calls don't compound:
    void set_scale(float s) { q *= sqrt(s / get_scale()); }
};

That way, you only need to deal with the square root once: when you are setting up the transformation, not when you are transforming the vectors. EDIT: Changed the wording of the last sentence to make it clear that I am not talking about transforming vectors with the square root.
  12. Physics + Threading

    To achieve parallelism, many systems today use task-based multithreading (for example, you could try Intel's TBB) or else try to push computations onto the GPU. These more flexible methods allow you to focus on algorithmic and performance issues without having to make too many difficult architectural decisions. That said, making physics parallel (at least rigid body dynamics) is very difficult. One of the fundamental obstacles to parallelizing rigid body dynamics is that it basically requires solving a system of unilateral constraints, which ultimately reduces down to linear programming. There are many techniques for doing this (such as sequential impulses), but the one thing they all have in common is that they are not parallel. There is a good theoretical reason why this is the case: linear programming is [url=""]P-complete[/url], which means that if it could be efficiently parallelized, then every polynomial-time problem could be efficiently parallelized. As a result, it is not likely that there is a good general solution to this problem. However, that doesn't mean parallelizing rigid body dynamics is impossible, merely that you shouldn't expect to find a good generic solution that works without much effort. You can potentially take advantage of parallelism if you can decouple your rigid body dynamics simulation or break it up into isolated islands. This is probably the case in many real world examples, but in the worst case you can't count on it (for example, stacking is a situation which really stresses this kind of decomposition, and it is also one of the situations that many rigid body simulators get wrong). You can also bring in some parallelism at the level of collision detection, which can be used to prune down the number of constraints that you need to solve. As a result, it is still possible to derive some benefit from parallelization, but it is a hard won battle.
Of course, if you don't care about rigid body dynamics, then many other physical systems are easy to parallelize. Typically unconstrained systems like PDEs or particle systems are very easy to push onto the GPU/farm out to multiple threads and get running at massive scales.
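As a sketch of that last point -- this uses plain C++0x std::thread rather than TBB's API, and the force model is a made-up gravity-only integrator -- an unconstrained particle system parallelizes by simply splitting the array into disjoint chunks, since no constraint couples one particle to another:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <thread>
#include <vector>

struct Particle { double x, y, z, vx, vy, vz; };

// Each particle integrates independently (no coupling constraints),
// so the array splits into disjoint chunks, one per thread, with no
// locking needed at all.
void integrate(std::vector<Particle>& ps, double dt, unsigned nThreads = 4) {
    auto worker = [&ps, dt](size_t lo, size_t hi) {
        for (size_t i = lo; i < hi; ++i) {
            ps[i].vz -= 9.81 * dt;        // symplectic Euler: velocity first
            ps[i].x += ps[i].vx * dt;
            ps[i].y += ps[i].vy * dt;
            ps[i].z += ps[i].vz * dt;
        }
    };
    std::vector<std::thread> threads;
    size_t chunk = (ps.size() + nThreads - 1) / nThreads;
    for (unsigned t = 0; t < nThreads; ++t) {
        size_t lo = t * chunk;
        size_t hi = std::min(ps.size(), lo + chunk);
        if (lo < hi) threads.emplace_back(worker, lo, hi);
    }
    for (size_t i = 0; i < threads.size(); ++i) threads[i].join();
}
```

The moment you add contact constraints between particles, the chunks stop being independent and you are back to the coupled-solver problem described above.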
  13. SQT Quaternions

    Why not get rid of the scale parameter and just use a denormalized quaternion? After all, s * (q * v * qc) = (sqrt(s) * q) * v * (sqrt(s) * qc), where qc denotes the conjugate of q.
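A quick numerical sanity check of that identity, with a hand-rolled Quat type (the names here are mine): rotating v = (1,0,0) by 90 degrees about z and scaling by s = 4 gives the same answer whether the scale is applied outside the sandwich product or folded into the quaternion as sqrt(s):

```cpp
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };

// Hamilton product.
static Quat mul(const Quat& a, const Quat& b) {
    Quat r = { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
               a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
               a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
               a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
    return r;
}
static Quat conj(const Quat& q)            { Quat r = { q.w, -q.x, -q.y, -q.z }; return r; }
static Quat scale(const Quat& q, double s) { Quat r = { s*q.w, s*q.x, s*q.y, s*q.z }; return r; }

// q * v * qc with v embedded as a pure quaternion; if q is denormalized
// the result is rotated AND scaled by |q|^2.
static Quat sandwich(const Quat& q, double vx, double vy, double vz) {
    Quat v = { 0, vx, vy, vz };
    return mul(mul(q, v), conj(q));
}
```

With q the unit rotation by 90 degrees about z, both sides come out to (0, 4, 0), confirming that the scale can live inside the quaternion.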
  14. Don't mean to nag or anything, but it was always my favorite feature on the site. Are there any plans to resurrect it in the near future? I really do miss seeing all the cool pictures of everyone's projects on the front page.
  15. nearest neighbour 1-bit bitmap

    I don't know if you are still working on this problem, but you may want to take a look at this paper: Gionis, Indyk and Motwani, "Similarity Search in High Dimensions via Hashing," Proc. of VLDB (1999). [url=""]PDF[/url] As far as I know, it is the current best solution to the Hamming distance nearest neighbor problem.