I need to merge 2D images in order to obtain a high-resolution texture map for human bodies.
My images are the camera images provided by a 3D scanner: taken from different views, each shows (a part of) the body plus the background.
I know how to map these images onto a coherent texture map, but I need a fully "seamless" result when merging them -- that is, no patch discontinuities due to the different lighting conditions of each view.
Blending techniques often use a "feathering weight" approach: for each original 2D camera image, a feathering function f(x) is computed on the image pixels x such that f(x) = 0 outside the object and f(x) grows linearly to a maximum value of 1 inside the object.
Any ideas on how to model f(x) with an appropriate function?
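For reference, the linear feathering weight described above is commonly built from a distance transform of the object mask. Here is a minimal sketch, assuming you already have a binary mask of the object; `feather_px` is a hypothetical tuning parameter, not something from the original post:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def feathering_weights(mask, feather_px=50):
    """Linear feathering weight f(x): 0 outside the object,
    rising linearly with distance from the object boundary,
    clamped at a maximum of 1.

    mask:       boolean array, True inside the object.
    feather_px: distance (in pixels) over which f ramps from 0 to 1
                (assumed parameter; tune it to your scan resolution).
    """
    # Euclidean distance from each inside pixel to the nearest
    # outside pixel; pixels outside the mask get distance 0.
    dist = distance_transform_edt(mask)
    return np.clip(dist / feather_px, 0.0, 1.0)
```

Pixels deep inside the object then reach the full weight 1, while pixels near the silhouette fade toward 0, which is what makes overlapping patches cross-fade instead of showing a hard seam.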
feathering weights for texture blending
Posted 25 October 2012 - 08:20 AM
Well, if your images are axial views, taken along the X, Y and Z axes, then you can use triplanar projection, and use the normal of a given point on the model to calculate 3 blend weights for each of the projections. You could theoretically extend this to any number of projections, as long as you know the vector or axis along which each image is projected.

To calculate a blend factor for any given point, you take the dot product of the geometry normal against the projection axis of the image. You have to calculate the blends for every axis/image projection, then normalize all of the blends so that they sum to 1, in order to avoid any lightening/darkening of the images. (Just sum all of the blend factors together, then divide each factor by this sum.)
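The dot-product-then-normalize step above can be sketched in a few lines of NumPy. Note that clamping negative dot products to zero (ignoring views that face away from the surface) is an added assumption, not stated in the reply:

```python
import numpy as np

def view_blend_weights(normal, view_axes):
    """Blend weights for a surface point, given its unit normal and
    the unit projection axis of each camera image.

    normal:     unit surface normal, shape (3,).
    view_axes:  unit projection axes, shape (k, 3).
    Returns k non-negative weights that sum to 1.
    """
    normal = np.asarray(normal, dtype=float)
    axes = np.asarray(view_axes, dtype=float)
    # Dot product of the normal with each projection axis: a view
    # facing the surface head-on gets the largest raw weight.
    raw = np.clip(axes @ normal, 0.0, None)  # drop back-facing views
    total = raw.sum()
    if total == 0.0:
        # Degenerate case: no view faces this point; split evenly.
        return np.full(len(axes), 1.0 / len(axes))
    # Normalize so the weights sum to 1 and the blended texture
    # is neither lightened nor darkened.
    return raw / total
```

For the triplanar case, `view_axes` would just be the three coordinate axes, e.g. `np.eye(3)`.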