# tobiasjs
1. In one dimension, your ray travels through the near plane and then through the far plane. To get into the box, the ray must pass the near plane in all three dimensions before passing any far plane. So max(tmin) gives you the entry point, and min(tmax) gives you the exit. If you exit before you enter, the ray doesn't intersect.

   Note, however, that this code isn't going to do so well when your ray is axis-aligned (division by zero when calculating invR).
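A minimal sketch of the slab test described above, with the axis-aligned (zero direction component) case handled explicitly instead of dividing by zero; the function and parameter names are my own:

```python
def ray_aabb_intersect(origin, direction, box_min, box_max, eps=1e-12):
    """Slab test: intersect a ray with an axis-aligned box.

    Returns (t_enter, t_exit), or None if the ray misses the box.
    """
    t_enter, t_exit = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < eps:
            # Ray is parallel to this slab: it misses unless the
            # origin already lies between the two planes.
            if o < lo or o > hi:
                return None
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_enter = max(t_enter, t0)  # latest entry over all slabs
            t_exit = min(t_exit, t1)    # earliest exit over all slabs
    if t_enter > t_exit or t_exit < 0:
        return None  # exits before it enters, or box is behind the ray
    return t_enter, t_exit
```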
2. I'll preface this by saying that robust triangulation is very difficult. Whatever you implement will almost certainly fail for some inputs.

   FIST (http://www.cosy.sbg.ac.at/~held/projects/triang/triang.html) is an algorithm that is comparatively easy to implement. It relies on ear clipping, as does yours. The paper is good and pretty easy to read.

   I'm not completely sure what your problem is from reading your description, so I have comments along two lines.

   1) Performance:

   Given that you're dealing with ~1e6 points, you really should use some kind of spatial subdivision structure to accelerate the search for points inside the ear you intend to clip.

   When you clip an ear of your polygon, having to shift ~1e6 indices in V seems wasteful. A linked list might actually be an appropriate structure here (especially if you allocate the nodes as a block rather than individually). At the very least, start working backwards from the end of the array rather than the front, so that you have less data to shift around.

   Exploit locality of reference by not moving v when you successfully clip an ear.

   2) Triangulation quality:

   It seems like you're worried about producing large triangles that make it hard to render small regions of the total triangulation efficiently. If you think about a circle, there are basically two approaches to triangulating it. One produces a fan with long, thin triangles. The other produces lots of small triangles at the edge and some large triangles at the centre. Any way you triangulate a circle will produce a result that is hard to render locally; in general, I believe you won't be able to avoid this.

   You could attempt to improve the quality of your triangulation by looking at adjacent triangles (pairs that form a quad) and flipping the internal edge. You can treat this as an optimization problem: define the flip cost as the difference in summed areas of the bounding boxes of the two triangles before and after flipping, then greedily flip triangle pairs while doing so decreases the cost. Other cost functions (such as how far the triangles deviate from equilateral) will also work.

   I suspect that what you actually want is to add points to smooth out the triangulation. This is not easy to implement, but you can take a look at the Triangle library by Jonathan Shewchuk (http://www.cs.cmu.edu/~quake/triangle.html) for an implementation of conforming Delaunay triangulation (which is what you want). The Triangle code is free for non-commercial use, so this might be an option for you.

   Toby.
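A minimal ear-clipping sketch using the linked-list idea above, so removing an ear is O(1) instead of shifting indices. It assumes a simple polygon in counter-clockwise order, the names are my own, and it omits the robustness handling that FIST addresses:

```python
def triangulate(poly):
    """Ear clipping over a doubly linked list of vertex indices.
    `poly` is a list of (x, y) points in CCW order; returns a list
    of index triples."""
    n = len(poly)
    nxt = {i: (i + 1) % n for i in range(n)}
    prv = {i: (i - 1) % n for i in range(n)}

    def cross(a, b, c):
        return (b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0])

    def point_in_tri(p, a, b, c):
        return (cross(a, b, p) >= 0 and cross(b, c, p) >= 0
                and cross(c, a, p) >= 0)

    tris, i, remaining = [], 0, n
    while remaining > 3:
        a, b, c = prv[i], i, nxt[i]
        ear = cross(poly[a], poly[b], poly[c]) > 0  # convex corner?
        if ear:
            # An ear must not contain any other remaining vertex.
            j = nxt[c]
            while j != a:
                if point_in_tri(poly[j], poly[a], poly[b], poly[c]):
                    ear = False
                    break
                j = nxt[j]
        if ear:
            tris.append((a, b, c))
            nxt[a], prv[c] = c, a  # unlink b in O(1)
            remaining -= 1
            i = a
        else:
            i = nxt[i]
    tris.append((prv[i], i, nxt[i]))
    return tris
```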
3. Vectors have more cache-coherent access patterns than heaps. Also, if your heap contains objects rather than pointers, swaps are expensive operations; unless your nodes are very simple, your heap should definitely consist of pointers to nodes. With a bit of magic you could also wrap the pointers so that each node keeps track of its pointer's current position in the heap, making updates more efficient. To properly understand whether a heap is justified, you need to collect some statistics about the distribution of open-node counts at each exploration step.
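One way to sketch that pointer-wrapping idea: a binary min-heap that stores node objects and writes each node's heap index back into the node, so a decrease-key needs no linear search. The class and attribute names are my own:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.pos = -1  # index in the heap array, maintained by the heap


class IndexedHeap:
    """Min-heap of Node objects with O(log n) decrease-key."""

    def __init__(self):
        self.a = []

    def _swap(self, i, j):
        self.a[i], self.a[j] = self.a[j], self.a[i]
        self.a[i].pos, self.a[j].pos = i, j  # keep positions in sync

    def _up(self, i):
        while i > 0 and self.a[i].key < self.a[(i - 1) // 2].key:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _down(self, i):
        n = len(self.a)
        while True:
            small = i
            for c in (2 * i + 1, 2 * i + 2):
                if c < n and self.a[c].key < self.a[small].key:
                    small = c
            if small == i:
                return
            self._swap(i, small)
            i = small

    def push(self, node):
        node.pos = len(self.a)
        self.a.append(node)
        self._up(node.pos)

    def pop(self):
        self._swap(0, len(self.a) - 1)
        top = self.a.pop()
        if self.a:
            self._down(0)
        return top

    def decrease_key(self, node, new_key):
        node.key = new_key
        self._up(node.pos)  # node knows where it lives: no search needed
```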
4. The good thing is that you don't need to recompute paths all that often. You should be able to implement it efficiently enough for realtime use.

   You should definitely use a heap rather than bubble sort. Bubble sort is only ever a good idea for almost-sorted data; its average time complexity is O(n^2), which means it will quickly dominate the pathfinding. A heap, in contrast, can fully sort in O(n log n) time, and incremental operations such as adding and removing values are O(log n).

   Another useful optimization is to break your map up into convex regions, because the shortest path within a convex region is straightforward. You can then take bigger steps (crossing an entire convex region in one move) and potentially reach your goal with less exploration.
5. In order to find the optimal route, you need to implement A* somewhat differently.

   First, your heuristic must underestimate the remaining cost after a candidate step; at the moment, I suspect it doesn't.

   Second, to find the optimal path, the A* algorithm needs to explore all the way to the target. You keep a heap of <fCost, path> tuples and iterate: pop the least-cost path off the heap, enumerate all valid moves from its endpoint, and push the updated tuples back onto the heap, until you reach the goal.

   You can't run a single iteration of that loop and then commit to a movement choice based on it.

   A* is essentially a best-first search of the tree of moves, with the heuristic serving to delay searching likely unprofitable paths. What it sounds like you're implementing currently (fixing a move choice at each step) is closer to a greedy depth-first search.
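The pop/expand/push loop above, sketched with Python's heapq; `neighbors`, `cost`, and `heuristic` are assumed to be callables supplied by the caller, and the names are my own:

```python
import heapq

def astar(start, goal, neighbors, cost, heuristic):
    """A* over an explicit graph. `heuristic(n)` must never
    overestimate the remaining cost to `goal`, or the returned
    path may not be optimal."""
    open_heap = [(heuristic(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path  # first time the goal is popped, the path is optimal
        if g > best_g.get(node, float("inf")):
            continue  # stale entry, superseded by a cheaper route
        for nxt in neighbors(node):
            g2 = g + cost(node, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_heap,
                               (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None  # goal unreachable
```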
6. Just add struct gx_quadpatch_s; above the function typedef to forward declare the struct.
7. The triangle is defined as the set of points B + sE0 + tE1, where s >= 0, t >= 0 and s + t <= 1. s and t are two components of the triangle's barycentric coordinates.

   B is a vertex of the triangle (you can think of it as the 'base' vertex). E0 and E1 are two edge vectors. If the triangle is defined by vertices A, B, C, then E0 and E1 are C-B and A-B respectively (assuming right-hand-rule ordering of the triangle).

   If you want a complete implementation to look at, you can find one here:

   https://code.google.com/p/carve/source/browse/lib/geom.cpp
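A 2D sketch of that test: solve p = B + s*E0 + t*E1 for s and t by Cramer's rule, then check the sign conditions. The function name is mine:

```python
def point_in_triangle(p, a, b, c, eps=1e-9):
    """True if p lies in the triangle with 'base' vertex b,
    using the parameterisation p = b + s*(c - b) + t*(a - b)."""
    e0 = (c[0] - b[0], c[1] - b[1])  # E0 = C - B
    e1 = (a[0] - b[0], a[1] - b[1])  # E1 = A - B
    d = (p[0] - b[0], p[1] - b[1])
    det = e0[0] * e1[1] - e0[1] * e1[0]
    if abs(det) < eps:
        return False  # degenerate triangle
    s = (d[0] * e1[1] - d[1] * e1[0]) / det  # Cramer's rule
    t = (e0[0] * d[1] - e0[1] * d[0]) / det
    return s >= -eps and t >= -eps and s + t <= 1 + eps
```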
8. Another alternative would be to implement computation of one z slice as a shader. You render your model (orthographic projection, no back face culling), rejecting z values less than your current slice, and counting the number of times each pixel is hit. Any odd values represent positions of cubes in this slice.
9. Yes, except that the closest triangle test is a bit more complex than just testing vertices, because the closest point may be on an edge or within a face, and you have to account for that. The link I posted provides an algorithm that performs the full test.
10. Essentially you want to compute, for a given point in 3D space, whether it's inside or outside your mesh (which needs to be watertight for this to be well defined).

   For a triangle mesh, a robust way to do this is to find the closest point on the mesh to your query point, and then test the dot product of the triangle normal with the vector from the closest point to your query point (negative = inside, positive = outside, zero = on the surface).

   Alternatively, you can cast a ray in a random direction from your query point and count the number of crossings of the mesh. This is a little tricky because you have to worry about intersections at an edge or vertex of the mesh, as well as glancing intersections.

   Taking the first option, you would need to be able to calculate the closest point on a triangle to a given point (see: http://www.geometrictools.com/Documentation/DistancePoint3Triangle3.pdf) and then use an appropriate spatial query structure (such as an octree, r-tree, or kd-tree) to speed up the testing.

   Then you simply evaluate this at regular grid coordinates to generate your voxel data.
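A sketch of the second (ray-casting) option, using the Möller–Trumbore ray/triangle test. Note that it only dodges the edge/vertex and glancing cases by picking an unlikely ray direction, rather than handling them robustly; names are my own:

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: returns the hit distance t > 0, or None."""
    e1 = [v1[i] - v0[i] for i in range(3)]
    e2 = [v2[i] - v0[i] for i in range(3)]
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None  # ray parallel to triangle plane
    inv = 1.0 / det
    s = [orig[i] - v0[i] for i in range(3)]
    u = dot(s, p) * inv
    if u < 0 or u > 1:
        return None
    q = cross(s, e1)
    v = dot(d, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def point_inside(point, triangles):
    """Odd number of ray/mesh crossings = inside a watertight mesh."""
    # Irrational-ish direction makes exact edge/vertex hits unlikely.
    d = (0.577350269, 0.577350269, 0.577350269)
    hits = sum(1 for tri in triangles
               if ray_triangle(point, d, *tri) is not None)
    return hits % 2 == 1
```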