Alright, this is probably something fairly simple for those well-versed in mesh/vertex manipulation but I'm having trouble wrapping my head around a solution and haven't found anything else close enough to my problem to get me started in the right direction.
I have an existing mesh plane, which could be a rectangle or any convex/concave polygon. What I need to do is create an identical mesh, and then effectively scale that mesh down to create a border of distance 'd' between the edges of the original and the duplicate. Duplicating the mesh and its vertices/triangles isn't a problem, but properly scaling it is: I don't have enough experience in graphics programming to know how to scale all the verts as a whole while keeping the result centered on its original position, etc.
Normally, I would simply scale an object down by hand in an editor until it "looked right" or do it in a 3d program, but in this instance it needs to be inset dynamically at runtime by an input value, and be physically correct. I'm hoping I posted this in the right place since it's a little more specific than a generic 'getting started' thread. Thanks for any help in advance, I've been racking my brain to try to figure it out but graphics programming isn't my forte and I definitely want to nail it down!
Yeah, you'll definitely need to offset the vertices by some factor in the direction of their surface normals. Scaling the entire model moves vertices relative to the origin of the whole model, which is not the same thing, and it's especially noticeable on concave shapes. Under a scaling transform every vertex moves along the line through the object's origin, but for an offset (aka dilate) operation a vertex needs to move along its surface normal — and on a concave shape, a vertex whose surface normal points towards the origin needs to move in roughly the opposite direction to what scaling would do.
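To make the difference concrete, here's a small sketch (my own example, not your mesh) comparing the two directions in 2D on a U-shaped polygon. At the reflex corner, the direction toward the centroid — which is where uniform down-scaling would move the vertex — points almost exactly opposite to the inward-normal direction an inset actually needs:

```python
import math

# Hypothetical U-shaped polygon, counter-clockwise winding.
poly = [(0, 0), (6, 0), (6, 6), (4, 6), (4, 2), (2, 2), (2, 6), (0, 6)]

def centroid(verts):
    # Standard shoelace centroid of a simple polygon.
    a = cx = cy = 0.0
    for i in range(len(verts)):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % len(verts)]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

def inward_dir(verts, i):
    # Average of the two adjacent edges' inward normals.
    # For CCW winding, the inward normal of edge (dx, dy) is (-dy, dx).
    def normal(a, b):
        dx, dy = b[0] - a[0], b[1] - a[1]
        length = math.hypot(dx, dy)
        return -dy / length, dx / length
    n1 = normal(verts[i - 1], verts[i])
    n2 = normal(verts[i], verts[(i + 1) % len(verts)])
    sx, sy = n1[0] + n2[0], n1[1] + n2[1]
    length = math.hypot(sx, sy)
    return sx / length, sy / length

cx, cy = centroid(poly)
vx, vy = poly[5]                       # the reflex corner at (2, 2)
inset = inward_dir(poly, 5)
toward_center = (cx - vx, cy - vy)     # where scaling-down would move it
dot = inset[0] * toward_center[0] + inset[1] * toward_center[1]
```

The dot product comes out negative here, i.e. scaling toward the centroid moves that corner the wrong way entirely.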
Making two layers using the surface normals is pretty easy but here are some pitfalls:
1) Which way do you let the user scale the derived surface? Or do you support both thicker and thinner? This affects the lighting, as you need to keep the winding order consistent.
2) If the surface is open like a tube, you will need to stitch the layers together at the ends. This is much more of a challenge than just deriving a thicker layer.
3) If you supported cutting holes and stitching pieces together, there are all-new problems when you add thickness: the two layers (each featuring seamless articulation) are no longer simply offset by the normals; the projected region of intersection needs to be scaled about its midpoint and reprojected onto a slightly different surface.
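For pitfall 2, the stitching itself is mostly an indexing exercise once the two layers share the same vertex order along the open border. A sketch of one way to do it (hypothetical helper — it assumes matching vertex runs on each layer, and per pitfall 1 you may need to reverse the winding depending on which side should face out):

```python
def stitch_border(outer_start, inner_start, count, closed=False):
    # Connect two matching runs of border vertices (one on each layer)
    # with quads, emitted as two triangles each.
    # outer_start / inner_start: first vertex index of each run.
    # closed: wrap the last quad back to the first vertex (a tube's rim).
    tris = []
    quads = count if closed else count - 1
    for i in range(quads):
        a = outer_start + i
        b = outer_start + (i + 1) % count
        c = inner_start + (i + 1) % count
        d = inner_start + i
        # One quad = two triangles; reverse these if the seam faces inward.
        tris += [a, b, c, a, c, d]
    return tris
```

For example, `stitch_border(0, 10, 4)` seals a 4-vertex open edge between vertices 0–3 on one layer and 10–13 on the other.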
Well, this only applies to a simple polygon; it can be concave or convex, but it can't cross itself or have "holes". What I ended up doing was tracing the perimeter, since in this circumstance all vertices are on the border of the poly plane. For each vertex I took the normal of the 'edge' vector between the current vertex and the previous one, and the normal of the edge vector from the current vertex to the next, scaled each by the amount of inset needed, and stored that offset in a temp array, which was then applied after looping through them all. This wouldn't work exactly as desired for concave polys and would sometimes translate a vertex in the wrong direction, so I added a 'point in polygon' check: if the moved vertex was outside the original poly's surface, I used the inverse of that translation to get it into the right position. This should work with 3D and complex polygons as well with a little more effort; I just didn't go that far this time. Thanks for the help everyone!
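For anyone landing here later, a minimal 2D Python sketch of the approach described above (names are my own; one detail I've added is the standard miter formula d·(n1+n2)/(1+n1·n2), which keeps the border width exactly 'd' even at corners that aren't right angles):

```python
import math

def unit_normal(a, b):
    # One of the two unit perpendiculars to edge a->b; the flip step
    # below corrects it when it points the wrong way.
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    return dy / length, -dx / length

def point_in_polygon(x, y, verts):
    # Standard even-odd ray-casting test.
    inside = False
    for i in range(len(verts)):
        x0, y0 = verts[i - 1]
        x1, y1 = verts[i]
        if (y0 > y) != (y1 > y):
            t = (y - y0) / (y1 - y0)
            if x < x0 + t * (x1 - x0):
                inside = not inside
    return inside

def inset_polygon(verts, d):
    # Inset a simple (non-self-intersecting, hole-free) polygon by d.
    # Each vertex moves along the miter direction built from its two
    # adjacent edge normals; d*(n1+n2)/(1+n1.n2) keeps the perpendicular
    # distance to both edges exactly d. (Assumes no degenerate
    # 180-degree spikes, where 1+n1.n2 approaches zero.)
    offsets = []
    n = len(verts)
    for i in range(n):
        cx, cy = verts[i]
        n1 = unit_normal(verts[i - 1], verts[i])
        n2 = unit_normal(verts[i], verts[(i + 1) % n])
        s = d / (1.0 + n1[0] * n2[0] + n1[1] * n2[1])
        ox, oy = s * (n1[0] + n2[0]), s * (n1[1] + n2[1])
        # The point-in-polygon trick from the post: if the offset pushed
        # the vertex outside the original poly, invert the translation.
        if not point_in_polygon(cx + ox, cy + oy, verts):
            ox, oy = -ox, -oy
        offsets.append((ox, oy))
    # Offsets are stored first and applied after the loop, so every
    # normal is computed from the original, unmodified positions.
    return [(v[0] + o[0], v[1] + o[1]) for v, o in zip(verts, offsets)]
```

For example, insetting the square (0,0)-(4,0)-(4,4)-(0,4) by 1 gives (1,1)-(3,1)-(3,3)-(1,3), and the flip handles the reflex corner of an L-shape correctly.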