feathersanddown

Undergraduate thesis suggestions



Hi everyone. First, sorry for my bad English, it isn't my native language. Second, I'm not sure if this is the right place; maybe it should have gone in the "Graphics Programming and Theory" forum instead.

As the subject says, I'm looking for an undergraduate thesis topic for my university. The idea I want to develop is, broadly, a motion capture system. My thesis advisor is an expert in distributed systems, so we could try to combine the two. Now the problem: is there a reasonable way to combine them?

Let me be more specific. I know what 'parallel rendering' and 'render farms' are, and there are already theses on those topics, so the idea is not to repeat them. I have been reading papers on what a basic motion capture system needs to do the job, but I'm a little confused, and I don't know whether any part of the pipeline can run in parallel in a way that makes sense on a distributed system. I also think a 'distributed system' could be a WAN of machines or just a multi-core CPU; or not? All we really need is a bunch of CPU 'nodes'. Maybe the thesis could be efficiency tests of some algorithms. Motion capture is usually a real-time capture-analysis-rectification-representation pipeline.

Another question I have: is there a way to interpolate mathematically in 3D space? I have read somewhere that quaternions are useful here, both for efficiency and for understanding, so my thesis could be named something like "use of quaternions to describe captured movement", leaving the distributed part out.

I plan to capture movement with two webcams placed orthogonally in space: one capturing the x,y plane, and the other capturing the 'depth', i.e. the z coordinate as (x,z) or (y,z), using triangulation. I can't find a mathematical foundation for exactly that setup, but I think I can propose it in my 'proposal' chapter. Standard stereo vision theory covers two cameras in the same plane pointing in the same direction, using triangulation to recover the missing depth of the object, but it doesn't help me much when one camera is in a different place and pointing in a different direction.

Finally, the idea is not to program a huge distributed motion capture system; the code would be owned by my university, so we could discuss the 'algorithms' in 'pseudocode'.

I'm tired, more specific things in the next post ;)

Thanks
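To show what I mean by interpolating orientations, here is a small sketch of slerp (spherical linear interpolation) between two quaternion keyframes. It is just the textbook formula written out in Python, not code from any particular mocap system, and the (w, x, y, z) component order is only my assumption for the example:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions.

    q0 and q1 are (w, x, y, z) tuples, t is in [0, 1]; returns the
    orientation a fraction t of the way from q0 to q1 along the
    shortest arc on the unit sphere.
    """
    dot = sum(a * b for a, b in zip(q0, q1))
    # q and -q describe the same rotation; flip one so we take the short arc.
    if dot < 0.0:
        q1 = tuple(-c for c in q1)
        dot = -dot
    # Nearly identical orientations: fall back to a normalized lerp to
    # avoid dividing by a tiny sin(theta).
    if dot > 0.9995:
        blended = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        norm = math.sqrt(sum(c * c for c in blended))
        return tuple(c / norm for c in blended)
    theta = math.acos(dot)            # angle between the two orientations
    s = math.sin(theta)
    w0 = math.sin((1.0 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return tuple(w0 * a + w1 * b for a, b in zip(q0, q1))

# Halfway between "no rotation" and a 90-degree turn about the z axis
# should be a 45-degree turn about z.
identity = (1.0, 0.0, 0.0, 0.0)
quarter_turn_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(slerp(identity, quarter_turn_z, 0.5))
```

The interpolated result stays a valid unit rotation, which, as far as I understand, is part of why quaternions are recommended over interpolating Euler angles directly.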

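And here is a toy version of the two-webcam geometry I have in mind, just to make the triangulation idea concrete. I am assuming two ideal pinhole cameras with the same focal length f and no lens distortion: camera A at the world origin looking along +z (it sees the x,y plane), and camera B at (d, 0, 0) looking back along -x with its image axes aligned to world z and y (it sees the depth). A real setup would of course need proper calibration; this is only the algebra of the idealized case:

```python
def triangulate_orthogonal(uA, vA, uB, f, d):
    """Recover a 3D point from two idealized, orthogonally placed cameras.

    Assumed projection model (a toy setup, not a calibrated one):
      camera A at the origin, looking along +z:  uA = f*x/z,  vA = f*y/z
      camera B at (d, 0, 0), looking along -x:   uB = f*z/(d - x)
    Solving those two equations for the common point (x, y, z) gives:
      z = uB * d * f / (f*f + uA * uB)
      x = uA * z / f
      y = vA * z / f
    Camera B's vertical image coordinate is redundant in this model;
    it could be used as a consistency check on y.
    """
    z = uB * d * f / (f * f + uA * uB)
    x = uA * z / f
    y = vA * z / f
    return x, y, z

# Quick check with the point (0.5, 0.3, 1.0), f = 1.0, baseline d = 2.0:
# camera A sees (0.5, 0.3) and camera B sees uB = 1.0 / (2.0 - 0.5).
print(triangulate_orthogonal(0.5, 0.3, 1.0 / 1.5, f=1.0, d=2.0))
# -> approximately (0.5, 0.3, 1.0)
```

If this works out to be the same triangulation as the standard side-by-side stereo case, just with the second camera rotated and translated differently, then maybe the general two-view formulation in the stereo vision literature does cover my setup after all; I would appreciate confirmation.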