Approximation of Normals in Screen Space

11 comments, last by LouisCastricato 11 years, 1 month ago

I think I wanna stay with the static approach, since I wouldn't have much of a science paper if I didn't (mainly because I wanna do something new, and extremely challenging).

http://www.cse.ust.hk/~pang/papers/ID0225.pdf

It's a guided approach, but it gives a baseline for this style of work. Honestly, though, multiple viewpoints or multiple lighting setups can uniquely determine the solution.
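The "multiple lighting setups" idea is classical photometric stereo: with three or more images of a static scene under known, distant lights, a Lambertian reflectance model lets you solve a small least-squares system per pixel for the (albedo-scaled) normal. This is a minimal illustrative sketch, not the method from the linked paper; the function name and array shapes are my own assumptions.

```python
import numpy as np

def photometric_stereo_normals(light_dirs, intensities):
    """Recover per-pixel unit normals from k >= 3 images of a static
    scene under known distant lights, assuming Lambertian shading
    (illustrative sketch, not the linked paper's method).

    light_dirs:  (k, 3) unit light directions, one per image
    intensities: (k, h, w) observed image intensities
    returns:     (h, w, 3) estimated unit normals
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)  # stack pixels: (k, h*w)
    # Lambertian model: I = L @ (albedo * n). Solve for g = albedo * n
    # at every pixel simultaneously with one least-squares solve.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    # The norm of g is the albedo; normalizing recovers the unit normal.
    norms = np.linalg.norm(g, axis=0, keepdims=True)
    n = g / np.maximum(norms, 1e-8)  # guard against dark/degenerate pixels
    return n.T.reshape(h, w, 3)
```

With well-conditioned (non-coplanar) light directions the per-pixel system has a unique solution, which is the sense in which multiple lighting setups pin down the normals where a single image cannot.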

Graphics Programmer - Ready At Dawn Studios

Um, I can see where the video suggestion comes from. One of the methods mentioned requires multiple images, and a user holding a phone is not going to have a steady aim (unlike e.g. a tripod), especially not while pressing the button. So instead of taking a photo, you could capture a few consecutive frames of video and feed those to the algorithm. The user would still probably think it's just like taking a pic, since the amount of time is very short =P

Alternatively, you could take one pic when the user presses the button and another when the user releases it. The two pics would be from slightly different viewpoints and could achieve the same result.

Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.


I think I wanna stay with the static approach, since I wouldn't have much of a science paper if I didn't (mainly because I wanna do something new, and extremely challenging).

http://www.cse.ust.hk/~pang/papers/ID0225.pdf

It's a guided approach, but it gives a baseline for this style of work. Honestly, though, multiple viewpoints or multiple lighting setups can uniquely determine the solution.

Thanks! That really helped. Based on my current system, I can do the method described in the paper without user input (besides the picture).

This topic is closed to new replies.
