

dnagcat

2d to 3d translation


Recommended Posts

I work at a small company that produces a machine which takes 2D images and carves them into a surface... I can't say much more than this. Carving logos and clipart is easy, but natural images have been quite a challenge for us. After months of exhausting research and trying multiple algorithms, I believe we are trying to achieve the impossible (of course, it is hard to convince my boss of this). Here is one such scenario: we were given an image from a partner in China to produce a carving of himself and his girlfriend. I failed to realize the scope of the problem until we attempted to produce the final result.

The problem: we take our cues from pixel intensities when producing our heightmap, but of course a 2D image only has positional cues relative to a light source (if you're lucky enough to only have one); we have no actual positional information. Things like dark hair get flattened out, lips move back (because they are darker than the skin), and teeth protrude outward due to their high pixel intensity. We could manipulate the heightmap manually or use tools to manipulate the 2D image, but those fixes look quite noticeably fabricated. I even implemented blending algorithms (like Photoshop's) that helped us at least get something that looked half decent, but it still wasn't pretty.

I reviewed this problem and it is well defined in stereography. I tried some algorithms, specifically Birchfield and Tomasi's, to help us understand producing depth maps from stereo images, but the results were far from stellar. Structured light looks promising, but it isn't likely that an average-joe user will use that technique to make his own carvings from images.

I believe my boss is making a request that is beyond what the starting material can provide, and I'm at my wits' end trying to find a solution. I posted this because I wanted to know whether I had missed anything in my conclusion about the problem. If you have any comments, I'd love to hear them!
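A minimal sketch of the intensity-driven heightmap described above (Pillow and NumPy assumed; the depth scale and smoothing radius are illustrative, not the poster's actual parameters). It makes the flaw concrete: brightness, which is a lighting cue, is mapped directly to carving depth.

```python
import numpy as np
from PIL import Image, ImageFilter

def intensity_heightmap(path, max_depth_mm=3.0, blur_radius=2):
    """Map grayscale intensity to carving depth: darker pixels carve deeper."""
    img = Image.open(path).convert("L")                          # 8-bit grayscale
    if blur_radius:
        img = img.filter(ImageFilter.GaussianBlur(blur_radius))  # suppress noise
    gray = np.asarray(img, dtype=np.float32) / 255.0
    # The core flaw: intensity encodes illumination, not surface position,
    # so dark hair flattens out and bright teeth protrude.
    return (1.0 - gray) * max_depth_mm                           # depth into the surface

# Example: heights = intensity_heightmap("portrait.png")
```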

From a general picture alone, it is IMPOSSIBLE to extract depth information.
If you have an "albedo" picture (pure natural color under neutral, uniform light), plus the lit picture and the position and parameters of each light, you can probably extract depth information (with no little difficulty).
Stereo images can help you a lot, even though that method suffers from several problems.
Structured light is currently the best method for extracting depth information (it's the most widely used mode of 3D scanning).
If you can scan your subjects with a structured light source (i.e., a laser beam) you can obtain depth maps; the algorithm is quite simple, based on triangulation trig formulas.
However, from a single picture you can only hope to automatically generate a "starting point" depth map and then refine it. But, as you noticed, you'll have problems with things like hair and teeth. I definitely think this is close to an impossible task.

Marco.
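The triangulation Marco mentions can be sketched in a few lines. This is a toy 2D version under an assumed geometry (camera at the origin looking along +Z, laser emitter offset along +X and tilted toward the optical axis); the function and parameter names are illustrative.

```python
import math

def depth_from_stripe(u_px, focal_px, baseline_mm, laser_tilt_deg):
    """Depth (mm) of a laser spot seen at horizontal pixel offset u_px.

    Camera ray:  x = z * (u_px / focal_px)
    Laser ray:   x = baseline_mm - z * tan(laser_tilt)
    Intersecting the two rays gives z = baseline / (tan(tilt) + tan(ray)).
    """
    ray_angle = math.atan2(u_px, focal_px)       # viewing angle of the stripe pixel
    tilt = math.radians(laser_tilt_deg)
    denom = math.tan(tilt) + math.tan(ray_angle)
    if abs(denom) < 1e-9:
        return float("inf")                      # rays nearly parallel: no intersection
    return baseline_mm / denom

# Example: depth_from_stripe(u_px=120, focal_px=800, baseline_mm=150, laser_tilt_deg=30)
```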

Believe it or not, it's a very fundamental physics problem standing in your way. You have 2D data; in the case of a photo, that is a flat projection, through an aperture, of a 3D-plus-temporal universe.

A 2D image is essentially a summary of a 3D dataset, and it is a provable fact that there is no unique pair of numbers summing to any given third: many different pairs produce the same sum, so the sum alone cannot tell you which pair you started with.

What does this have to do with your problem? Without more information than your lone 2D image, you cannot even extrapolate an accurate 2.5D voxelmap, because all you have is a finite-detail array of color values.

Essentially, yes; you accepted an impossible problem. I hope you aren't contracted to solve this.

The easy way out is to tell your boss you consulted a panel of experts (the GameDev.net forum members) and they told you it was impossible.

If you still want to do the impossible, then you will have to settle for an approximation model. Consider reducing the problem to just rendering a person's head. You will need a generic, manipulable 3D model of a head. You would probably have to manually adjust the model's initial orientation to match the photo, then apply some skin tones and set up some light sources. Then have the program make iterative fine adjustments of the model to improve the overall match to the photo. The iterations will reach a point of diminishing returns, and that is the time to stop; your model is then as good as it will get. Then you can use the model to drive the carving system.

If you spend several months on it, you might get something that works halfway decently.
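A rough sketch of the iterative-refinement loop described above. The render(params) callback is assumed to rasterize the head model (orientation, skin tone, lights) into a grayscale array the same size as the photo; all names and parameters here are illustrative, not a known working fitter.

```python
import numpy as np

def fit_model(photo, render, params, step_sizes, max_iters=200, min_gain=1e-4):
    """Greedy coordinate descent on pixel-wise error between render(params) and photo."""
    def error(p):
        return float(np.mean((render(p) - photo) ** 2))

    best = error(params)
    for _ in range(max_iters):
        improved = False
        for i, step in enumerate(step_sizes):
            for delta in (step, -step):          # nudge each parameter both ways
                trial = list(params)
                trial[i] += delta
                e = error(trial)
                if e < best - min_gain:          # keep only meaningful improvements
                    params, best, improved = trial, e, True
        if not improved:                         # diminishing returns: stop here
            break
    return params, best
```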

If you have several pictures, however, you can use a technique called "photogrammetry" to build textured 3D models from multiple photos. As for working purely from a single 2D image, you can use certain packages to "paint" depth onto the image, but that only gives you a single 3D view (like a nailboard). If you just need embossed results, then I guess this will do.
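To illustrate the "nailboard" limitation: a painted depth map only pushes each pixel along the viewing axis, so the output is a single-view relief with no back or side information, which is fine for embossing. The file names and depth scale below are placeholders.

```python
import numpy as np
from PIL import Image

def nailboard_points(color_path, depth_path, depth_scale_mm=5.0):
    """Return an (N, 3) array of X, Y, Z points for a single-view relief."""
    color = Image.open(color_path).convert("L")
    depth = Image.open(depth_path).convert("L").resize(color.size)  # align with the photo
    d = np.asarray(depth, dtype=np.float32) / 255.0                 # painted depth, 0..1
    h, w = d.shape
    ys, xs = np.mgrid[0:h, 0:w]                                     # pixel grid
    zs = d * depth_scale_mm                                         # displace along the view axis only
    return np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])

# Example: pts = nailboard_points("photo.png", "painted_depth.png")
```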

