3D Mesh surface area calculation errors after subdividing


Hello!

I have a strange issue with mesh surface area calculation. The surface area increases if I subdivide the mesh to be more detailed and add more polygons, even though the overall size of the model stays the same.
The code is rather simple, and the formula itself matters less than the fact that the result changes with polygon count.

float total_area = 0;
int indice_count = finalMeshIndices.length;
for (int i = 0; i < indice_count; i += 3)
{
  float area = 0;

  Vector3 p1 = new Vector3(finalMeshVertexFloatArray[i + 0], finalMeshVertexFloatArray[i + 1], finalMeshVertexFloatArray[i + 2]);
  Vector3 p2 = new Vector3(finalMeshVertexFloatArray[i + 10], finalMeshVertexFloatArray[i + 11], finalMeshVertexFloatArray[i + 12]);
  Vector3 p3 = new Vector3(finalMeshVertexFloatArray[i + 20], finalMeshVertexFloatArray[i + 21], finalMeshVertexFloatArray[i + 22]);

  // two edges from p1, and their cross product
  Vector3 e1 = p2.sub(p1);
  Vector3 e2 = p3.sub(p1);
  Vector3 e3 = e1.crs(e2);

  // triangle area = 0.5 * |e1 x e2|
  area = (float) (0.5 * Math.sqrt(e3.x * e3.x + e3.y * e3.y + e3.z * e3.z));

  if (Float.isNaN(area)) System.out.println("NaN... The fuck?");

  total_area += area;
}
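For what it's worth, the formula itself checks out on a known case. Here's a throwaway sanity check in plain floats (self-contained, no Vector3 class), and a unit right triangle comes out at the expected 0.5:

// Sanity check of the 0.5 * |e1 x e2| triangle area formula
// on a triangle with a known area, using plain floats.
public class TriAreaCheck
{
  static float triangleArea(float[] a, float[] b, float[] c)
  {
    float e1x = b[0] - a[0], e1y = b[1] - a[1], e1z = b[2] - a[2];
    float e2x = c[0] - a[0], e2y = c[1] - a[1], e2z = c[2] - a[2];
    // cross product e1 x e2
    float cx = e1y * e2z - e1z * e2y;
    float cy = e1z * e2x - e1x * e2z;
    float cz = e1x * e2y - e1y * e2x;
    return 0.5f * (float) Math.sqrt(cx * cx + cy * cy + cz * cz);
  }

  public static void main(String[] args)
  {
    // unit right triangle in the XY plane -> prints 0.5
    System.out.println(triangleArea(new float[]{0, 0, 0}, new float[]{1, 0, 0}, new float[]{0, 1, 0}));
  }
}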

I think the issue is with float precision, because I get the same kind of discrepancy if I take a model and flip it.

Here is a visual representation:
[two screenshots comparing the computed surface areas of the mesh before and after subdividing]

If I take a 1x1x1 cube, its computed area is ~5, even though it should be exactly 6. If I upscale it 3 times, its area becomes ~19.

So it seems that float precision is the culprit here. At least that's my guess.

Edited by Sollum


For your sphere this is not surprising. You aren't just dividing the triangles, you are also moving the vertices outwards to lie on the sphere surface (otherwise subdividing wouldn't make the shape any rounder). And when you move those vertices outwards, you are increasing the volume of the shape (and hence the surface area).
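To put numbers on it: an octahedron inscribed in the unit sphere has surface area 4*sqrt(3) ~= 6.93, while the sphere itself has 4*pi ~= 12.57, so repeated subdivide-and-project can legitimately grow the area by roughly 80%. Here's a minimal, self-contained sketch (my own illustration, not your code) that subdivides an octahedron and re-projects each new vertex onto the unit sphere:

import java.util.ArrayList;
import java.util.List;

// Subdivide an octahedron inscribed in the unit sphere; the total area
// grows from 4*sqrt(3) ~= 6.93 toward the sphere's 4*pi ~= 12.566.
public class SphereAreaDemo
{
  // midpoint of a and b, pushed back out onto the unit sphere
  static double[] mid(double[] a, double[] b)
  {
    double[] m = { (a[0] + b[0]) / 2, (a[1] + b[1]) / 2, (a[2] + b[2]) / 2 };
    double len = Math.sqrt(m[0] * m[0] + m[1] * m[1] + m[2] * m[2]);
    return new double[]{ m[0] / len, m[1] / len, m[2] / len };
  }

  // triangle area = 0.5 * |(b - a) x (c - a)|
  static double area(double[] a, double[] b, double[] c)
  {
    double[] e1 = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
    double[] e2 = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
    double cx = e1[1] * e2[2] - e1[2] * e2[1];
    double cy = e1[2] * e2[0] - e1[0] * e2[2];
    double cz = e1[0] * e2[1] - e1[1] * e2[0];
    return 0.5 * Math.sqrt(cx * cx + cy * cy + cz * cz);
  }

  public static void main(String[] args)
  {
    double[][] v = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };
    int[][] faces = { {0,2,4}, {2,1,4}, {1,3,4}, {3,0,4}, {2,0,5}, {1,2,5}, {3,1,5}, {0,3,5} };
    List<double[][]> tris = new ArrayList<>();
    for (int[] f : faces) tris.add(new double[][]{ v[f[0]], v[f[1]], v[f[2]] });

    for (int level = 0; level <= 4; level++)
    {
      double total = 0;
      for (double[][] t : tris) total += area(t[0], t[1], t[2]);
      System.out.println("level " + level + ": area = " + total);

      // split each triangle into four, projecting the new vertices outward
      List<double[][]> next = new ArrayList<>();
      for (double[][] t : tris)
      {
        double[] ab = mid(t[0], t[1]), bc = mid(t[1], t[2]), ca = mid(t[2], t[0]);
        next.add(new double[][]{ t[0], ab, ca });
        next.add(new double[][]{ ab, t[1], bc });
        next.add(new double[][]{ ca, bc, t[2] });
        next.add(new double[][]{ ab, bc, ca });
      }
      tris = next;
    }
  }
}

Each level converges toward 4*pi and never overshoots it, so a large jump in area from subdividing a sphere is expected.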


Sorry for the late reply.

Yes, I thought the same, but the difference is far too big in my opinion: the surface area comes out ~5 times larger after subdividing 2 or 3 times, while the volume itself only becomes ~3 times larger.

A different example:
[two screenshots of the same model, original and flipped, showing different area and volume readouts]

It's the same model, only flipped in the 3D modeling tool and loaded as a separate model, yet its computed area and volume differ.


Ok, this has to be an issue with how you are calculating the area. Looking at the function in your original post, one thing looks strange to me: the code references an index array, but then it doesn't use that index array to look up the vertices. Is this an indexed mesh or not?
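If it is an indexed mesh, each position has to be fetched through the index array first. Roughly like this, as a sketch reusing your names (the 10-float vertex stride is only a guess based on the +10/+20 offsets in your code):

int stride = 10; // floats per vertex -- whatever your real vertex layout is
for (int i = 0; i < finalMeshIndices.length; i += 3)
{
  // indices of the triangle's three vertices, scaled to float offsets
  int i0 = finalMeshIndices[i + 0] * stride;
  int i1 = finalMeshIndices[i + 1] * stride;
  int i2 = finalMeshIndices[i + 2] * stride;

  Vector3 p1 = new Vector3(finalMeshVertexFloatArray[i0 + 0], finalMeshVertexFloatArray[i0 + 1], finalMeshVertexFloatArray[i0 + 2]);
  Vector3 p2 = new Vector3(finalMeshVertexFloatArray[i1 + 0], finalMeshVertexFloatArray[i1 + 1], finalMeshVertexFloatArray[i1 + 2]);
  Vector3 p3 = new Vector3(finalMeshVertexFloatArray[i2 + 0], finalMeshVertexFloatArray[i2 + 1], finalMeshVertexFloatArray[i2 + 2]);

  // ... same cross-product area computation as in your post
}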

On 7.2.2018 at 8:38 PM, Sollum said:

Vector3 p2 = new Vector3(finalMeshVertexFloatArray[i + 10], finalMeshVertexFloatArray[i + 11], finalMeshVertexFloatArray[i + 12]);

Agreed, this looks like an indexing bug. Do your vertices have 10 floats each? Then increasing i by only 3 makes little sense; you probably forgot to read the index array?

for (int i = 0; i < indice_count; i += 3)

But why do you allocate the vertices with new? (Even if that's intended, you never delete them.)

:)

Edited by JoeJ


Thanks lads!

Crap, what a lame mistake that was. Fixing it was quick and easy:
 

// Each vertex is 10 floats wide, so the three vertices of triangle i
// start at 10 * i and sit 10 floats apart in the raw array.
int i_indiceMod = 10 * i;
Vector3 p1 = new Vector3(finalMeshVertexFloatArray[i_indiceMod + 0], finalMeshVertexFloatArray[i_indiceMod + 1], finalMeshVertexFloatArray[i_indiceMod + 2]);
Vector3 p2 = new Vector3(finalMeshVertexFloatArray[i_indiceMod + 10], finalMeshVertexFloatArray[i_indiceMod + 11], finalMeshVertexFloatArray[i_indiceMod + 12]);
Vector3 p3 = new Vector3(finalMeshVertexFloatArray[i_indiceMod + 20], finalMeshVertexFloatArray[i_indiceMod + 21], finalMeshVertexFloatArray[i_indiceMod + 22]);
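To double-check the fix, I fed it a shape with a known answer: a 1x1 quad split into two triangles should sum to exactly 1.0. A throwaway sketch matching the 10-floats-per-vertex layout above (position in the first three floats, the rest zero-padded):

// one unit quad = two triangles, expected total area 1.0
float[] quad = new float[6 * 10]; // 6 vertices, 10 floats each
float[][] pos = { {0, 0, 0}, {1, 0, 0}, {1, 1, 0},   // triangle 1
                  {0, 0, 0}, {1, 1, 0}, {0, 1, 0} }; // triangle 2
for (int v = 0; v < 6; v++)
  for (int k = 0; k < 3; k++)
    quad[v * 10 + k] = pos[v][k];
// running 'quad' through the area loop above prints a total of 1.0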

 

Everything now looks normal, and the two spheres differ only marginally in surface area. The first one does report a slightly larger area, but that makes sense to me.
[screenshots of the corrected area readouts for both spheres]

 

The flipped models are also fixed, though there is still a tiny ~0.000001 error :D
[screenshots of the flipped models' nearly identical readouts]
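That drift is just the order-dependent rounding of float addition. If it ever matters, compensated (Kahan) summation is the standard fix; a quick sketch (generic technique, not something I actually needed here):

// Kahan (compensated) summation: the rounding error of each addition is
// carried into the next one, so the total barely depends on summation order.
// 'triangleAreas' stands in for the per-triangle areas from the loop above.
float total = 0f, comp = 0f;
for (float area : triangleAreas)
{
  float y = area - comp;      // re-inject the error carried from the last step
  float t = total + y;        // low-order bits of y may be lost here...
  comp = (t - total) - y;     // ...so recover them algebraically
  total = t;
}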

For now I'm keeping the new operator, since the calculation runs only once, while loading the models. Later on I'll simply use the raw vertex data for the calculations.
 

Edited by Sollum

