Generating STL files from images and vice versa

Posted by torrentise


Hello all,

I am new to graphics programming. I am about to work on a project that requires producing STL files from images (dental images) and vice versa. Intermediate images will be used for image processing and recognition. The STL files will be input to a custom CAD system.

I am wondering about three things:
1. How to generate STL files in principle?
2. How to generate one from an image?
3. How to convert such a file to an image?

I would really appreciate any help. :D

Are BSP trees involved in any way? I have just read about the algorithm, and I am not even sure whether this is a naive or a crude question!

Thank you in advance.

Quote:
Original post by torrentise
Hello all,

I am new to graphics programming. I am about to work on a project that requires producing STL files from images (dental images) and vice versa. Intermediate images will be used for image processing and recognition. STL files will be input to a custom CAD system.

I am wondering about three things:
1. how to generate STL files in principle?
2. how to generate one from an image?
3. how to convert such a file to an image?

I really would appreciate any help. :D

Are BSP trees involved in any way? I have just read about the algorithm, and I am not even sure whether this is a naive or a crude question!

Thank you in advance.
First up, can you please delete your double post?

According to Wikipedia, the STL file format is just a simple triangular-mesh format. The mesh format is irrelevant to your main problem (converting from images to meshes or vice versa) - the techniques required for the main problem will be the same whether you use STL, 3DS, COLLADA, etc. files to store the mesh.
I suggest structuring your work with this separation in mind - you will need code that can load/store a mesh from/to an STL file, but this code should be independent of your main problem.
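
To illustrate what I mean by that separation, here's a rough sketch - the Vec3/Triangle/Mesh types and the loadStl/saveStl names are just made up for the example, not an existing library:

#include <string>
#include <vector>

// Format-neutral, in-memory mesh representation. Your image-processing /
// reconstruction code should only ever deal with this, never with STL
// (or 3DS, COLLADA, ...) details directly.
struct Vec3     { float x, y, z; };
struct Triangle { Vec3 normal; Vec3 v[3]; };
struct Mesh     { std::vector<Triangle> triangles; };

// All the file-format code hides behind a tiny interface, so you could
// swap STL for another format later without touching the rest.
bool loadStl(const std::string& path, Mesh& out);
bool saveStl(const std::string& path, const Mesh& mesh);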

Here are some high-level answers, but I'm guessing you'll probably need a lot more detail than this ;)

1) If you do some basic OpenGL or Direct3D tutorials, you will learn how to represent a triangular mesh in memory. Once you've learned this, it should be trivial to write out the triangles in the STL file format. It should also be pretty easy to write a parser for this format to load the triangles back into memory from disk.
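
For example, writing the ASCII flavour of STL is little more than a loop over the triangles. A minimal sketch (reusing the made-up Vec3/Triangle structs from above, with error handling kept to a bare minimum):

#include <cstddef>
#include <cstdio>
#include <vector>

struct Vec3     { float x, y, z; };   // same made-up structs as above
struct Triangle { Vec3 normal; Vec3 v[3]; };

// Write an ASCII STL file: one "facet" record per triangle.
bool saveStl(const char* path, const std::vector<Triangle>& tris)
{
    std::FILE* f = std::fopen(path, "w");
    if (!f) return false;

    std::fprintf(f, "solid mesh\n");
    for (std::size_t i = 0; i < tris.size(); ++i)
    {
        const Triangle& t = tris[i];
        std::fprintf(f, "  facet normal %f %f %f\n",
                     t.normal.x, t.normal.y, t.normal.z);
        std::fprintf(f, "    outer loop\n");
        for (int j = 0; j < 3; ++j)
            std::fprintf(f, "      vertex %f %f %f\n",
                         t.v[j].x, t.v[j].y, t.v[j].z);
        std::fprintf(f, "    endloop\n");
        std::fprintf(f, "  endfacet\n");
    }
    std::fprintf(f, "endsolid mesh\n");
    std::fclose(f);
    return true;
}

Loading is the reverse: scan for the "vertex x y z" lines and rebuild the triangle list. The binary STL variant is even easier to parse - an 80-byte header, a 32-bit triangle count, then 50 bytes per triangle.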

2) I need some more information about your problem to suggest anything for this one. Are the source images from X-Rays or something? Is that basically an image of density (white = dense matter, black = soft tissue)?
Do you just have one source image (e.g. side on), or a range of source images from different angles?

3) Again this will depend on the details of (2). To produce a flat image from your triangle mesh, you could clear the screen to black and render all of your triangles with an additive blending mode. This means every triangle will add a bit of lightness to the image, so successive layers of triangles will eventually add up to pure white. If you want a pure side-on image, you can use an orthographic projection for the camera.
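
As a very rough sketch of that idea in old-style (fixed-function) OpenGL - assuming a context is already set up, the mesh roughly fits inside a unit cube, and drawMesh() is a placeholder for however you actually submit your triangles:

#include <GL/gl.h>

void renderSideOnImage(/* const Mesh& mesh */)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);         // start from a black image
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);                  // orthographic = pure side-on view
    glLoadIdentity();
    glOrtho(-1.0, 1.0, -1.0, 1.0, -10.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);                     // let every triangle contribute
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);                  // additive blending

    glColor3f(0.05f, 0.05f, 0.05f);               // each layer adds a little brightness
    // drawMesh(mesh);                            // e.g. a glBegin(GL_TRIANGLES)/glVertex3f loop
}

You could then grab the result with glReadPixels and save it out as your image.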

Hello, Hodgman

Thanks for your helpful reply. I tried to delete the post, but I got a message saying it can now only be deleted by moderators or administrators because the time I had to delete it has expired.

The image source is a telecentric dental camera. It is different from usual intra-oral cameras: a single exposure is enough to develop a 3D model of the teeth of interest. As far as I know (I am not working on the project yet, but rather preparing to), it is not an image of density. I hope this clears things up a bit.

I noticed that you emphasised that the conversion depends on the sort of images I will be working with. Could you please point me to algorithms I can make use of or learn about? How can I further tackle the methods you suggested in (3)?

The software will run in Windows and UNIX-like environments. I know (not solid knowledge, though :D) that OpenGL is not by default tailored for Windows platforms and that Direct3D outperforms it. I would prefer to work with OpenGL so that everything is done with one technology, which would make things, especially maintenance, easier.

Is there a workaround for this? Is there an optimised OpenGL engine for Windows?

I really appreciate your help. I hope I am not bothering you with many details. Thank you so much.

Quote:
Original post by torrentise
The image source is a telecentric dental camera. It is different from usual intra-oral cameras: a single exposure is enough to develop a 3D model of the teeth of interest. As far as I know (I am not working on the project yet, but rather preparing to), it is not an image of density. I hope this clears things up a bit.
That the camera uses telecentric lenses merely guarantees that there is no distortion of scale with distance. You are going to need to obtain more information on the camera and its output. A single image does not provide enough data to reconstruct a 3-dimensional model, so you either need multiple images from different angles/distances, or something fancy like a holographic exposure.
Quote:
The software will run in Windows and UNIX-like environments. I know (not solid knowledge, though :D) that OpenGL is not by default tailored for Windows platforms and that Direct3D outperforms it. I would prefer to work with OpenGL so that everything is done with one technology, which would make things, especially maintenance, easier. Is there a workaround for this? Is there an optimised OpenGL engine for Windows?
OpenGL works fine under Windows, with comparable performance to D3D.

Hello swiftcoder,

Thanks for the help, I appreciate it. I will get more information about the input device. All I am sure of is that it is telecentric and that it requires only one exposure for a 3D model to be reconstructed.

Would you please give me some hints on reconstructing a 3D model from a set of images taken from different angles/distances? Algorithms used, concepts or tutorials....

As I mentioned previously, I am new to this sort of stuff, but I have to get up to speed quite quickly. So, pardon my questioning of every idea/method mentioned in the thread. :D
