Yep, I was talking about that transparent kind of hologram on fine-grain photographic film.
It's fun to make. The theory behind it is interesting too.
Back to parallax bump mapping:
You use an extra texture that contains the height of the point on the surface you want to represent, relative to the polygonal surface. You then encode the tangent plane of that surface (or any higher-degree surface if you want/can/are willing to pay for it).
When you render a pixel (of the real geometry), you figure out which point of the represented surface it matches by solving a plane/line intersection equation (with higher-order methods, you may get more than one answer). You then use that virtual point's position to compute the texture coordinates you actually sample with.
Holographic Texture Mapping?
So they're just saying to use a different texture depending on the view from which a surface is observed? I can see some clever uses for this (like even better cartoon shaders, in one case) but still it seems a lot of hooah for nothing. The "better lossy compression" thing sounded much nicer to me, given the nature of the Unreal engines - after all, the UT games have a propensity for allowing game servers to host tons of new and fun game content (mutators, models, maps, etc), then bog the clients down with downloading layers upon layers of lossless texture data.
Quote:So they're just saying to use a different texture depending on the view from which a surface is observed?
They alter the texture coordinates depending on the view angle; the texture itself stays the same. There's a paper on it here.
And here is a *long* thread about it, with different implementations and possible improvements discussed. A good read for anyone trying to implement it.