
How to ray-trace individual hair strands


1 reply to this topic

#1 LargeJ   Members   -  Reputation: 126


Posted 26 September 2012 - 01:57 PM

Hello all,

I want to implement a ray tracer that models hair fibers as described by Marschner et al. in "Light Scattering from Human Hair Fibers" (2003). From reading several other papers I noticed that hair can be rendered either explicitly or implicitly. Explicit rendering requires every hair strand to be rendered separately, but because hair fibers are very thin compared to the size of a pixel, there will likely be aliasing problems. I have read a lot about using volume densities instead, but I do not entirely understand this idea.

I was wondering what techniques are generally used to ray-trace a hair fiber. My idea is that hair segments (curves) can be projected onto the image plane. This way, you know exactly which pixels are affected, and you can then apply pixel blending to render the fibers that affect each pixel. However, I have not been able to find a (scientific) paper explaining the best way to render individual hair strands using ray tracing. It looks like many people choose to treat the curves as thin cylinders and use oversampling to accommodate the aliasing problems.
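To make the thin-cylinder idea concrete, here is a minimal sketch (my own illustration, not taken from any of the papers mentioned): each hair segment is treated as a capsule of small radius, and a ray hits it if its closest approach to the segment is within that radius. This uses the standard closest-point computation between a ray and a line segment; names and parameters are illustrative.

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_hits_hair_segment(ro, rd, p0, p1, radius):
    """Test a ray (origin ro, normalized direction rd) against one hair
    segment p0->p1 treated as a thin capsule of the given radius.
    Returns the ray parameter t at the closest approach if the ray
    passes within `radius` of the segment, else None."""
    u = rd
    v = [p1[i] - p0[i] for i in range(3)]
    w = [ro[i] - p0[i] for i in range(3)]
    a, b, c = dot(u, u), dot(u, v), dot(v, v)
    d, e = dot(u, w), dot(v, w)
    denom = a * c - b * b                 # always >= 0
    if denom > 1e-12:                     # ray and segment not parallel
        s_seg = (a * e - b * d) / denom   # parameter along the segment
    else:
        s_seg = 0.0
    s_seg = max(0.0, min(1.0, s_seg))     # clamp to the segment's ends
    t_ray = max(0.0, (b * s_seg - d) / a) # closest point on the ray
    offset = [w[i] + t_ray * u[i] - s_seg * v[i] for i in range(3)]
    if math.sqrt(dot(offset, offset)) <= radius:
        return t_ray
    return None
```

In practice one would also jitter multiple rays per pixel (the oversampling mentioned above) and blend the resulting hits, since a single sample will miss sub-pixel strands most of the time.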

So, does anyone know how a single hair fiber is ray traced nowadays? The rendering should be physically accurate, so speed is not an issue at this point.


#2 luca-deltodesco   Members   -  Reputation: 637


Posted 26 September 2012 - 02:09 PM

(This is my understanding)

With a volume density approach you are not necessarily looking for intersections between rays and individual hair strands, but against an adaptive iso-surface: you terminate when the distance from the current point on the ray to the hair is within a given tolerance. That tolerance can be adapted based on the distance from the camera, which reduces aliasing and the need for super-sampling. (In other words, as the distance from the camera increases, you can increase the tolerance so that the apparent width of the hair strand remains roughly constant in screen space.)
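That adaptive-tolerance idea can be sketched as sphere tracing a signed distance field, a capsule SDF per hair segment here, where the acceptance band widens with the distance travelled along the ray (all names and the `pixel_eps` footprint parameter are my own assumptions, not from the post):

```python
import math

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hair_sdf(p, p0, p1, radius):
    """Signed distance from point p to one hair segment (a capsule)."""
    pa, ba = sub(p, p0), sub(p1, p0)
    h = max(0.0, min(1.0, dot(pa, ba) / dot(ba, ba)))
    d = sub(pa, [h * ba[i] for i in range(3)])
    return math.sqrt(dot(d, d)) - radius

def sphere_trace(ro, rd, p0, p1, radius, pixel_eps=1e-3, t_max=100.0):
    """Sphere-trace the hair iso-surface with a tolerance that grows with
    the distance t travelled, so far-away hits are accepted within a wider
    band and the strand keeps a roughly constant width in screen space.
    (pixel_eps is a hypothetical per-pixel footprint; tune to the camera.)"""
    t = 0.0
    for _ in range(256):
        p = [ro[i] + t * rd[i] for i in range(3)]
        d = hair_sdf(p, p0, p1, radius)
        if d < pixel_eps * max(t, 1.0):   # distance-adaptive tolerance
            return t                      # hit
        t += d                            # safe step: nothing is closer than d
        if t > t_max:
            break
    return None                           # missed within t_max
```

A full renderer would query an acceleration structure for the nearest of many segments inside the SDF instead of a single capsule, but the termination criterion stays the same.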

Edited by luca-deltodesco, 26 September 2012 - 02:10 PM.







