Hi, I have some questions I hope to get an answer to.
What types of maps does a typical model need these days to look great? For instance, a high-detail animal such as a lion? What about a building (wood or stone)?
Does tessellation, fur, or hair require any maps? Nvidia had a fur demo with its GTX 6xx series; I guess this technology requires top-of-the-line graphics cards. Is there any point in realistically having fur and hair in a game with tens of characters in the same scene? Perhaps other, less costly fur and hair techniques are available.
there is also the bump map, but this information is often stored in the normal map.
one way of simulating fur (more traditionally) is via textures.
for example, the model would have its basic fur-colored skin layer, with extra geometry sticking off of it which would have an alpha-blended fur texture.
for example, these could be pyramid-shaped (or other patterns, like a "waffle" pattern), but really only the bottom part is used.
this then causes the model to have a "fuzzy" layer over its surface.
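just to make the fin idea concrete, a minimal fragment-shader sketch (furTexture and the 0.05 alpha cutoff are made-up names/values, not from any particular implementation):

```glsl
#version 330
uniform sampler2D furTexture;  // RGBA fur texture, alpha fades out toward the hair tips
in vec2 texCoord;
out vec4 fragColor;

void main()
{
    vec4 fur = texture(furTexture, texCoord);
    if (fur.a < 0.05)  // throw away nearly-transparent fragments
        discard;
    fragColor = fur;   // the rest is handled by ordinary alpha blending
}
```

this would be drawn with normal alpha blending enabled, on top of the fur-colored skin layer.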
using a map of some sort and a shader is also a possibility I guess, but I don't really know the specifics.
I would guess that the model could have a secondary "outer shell" for the fur layer (basically a slightly "inflated" version of the underlying model geometry), then probably at each point use the eye/point vector and normal to pick out a spot in the fur texture.
say, for example:
pix = texture(furTexture, vec2(s, t));
or something like this...
dunno how well it would work, just made this part up.
the basic idea though is that the bottom of the texture would be solid fur, whereas the top is wispy hairs; if the camera sees the surface more from the side, it sees the wispy hairs, and more from above, it sees the solid fur.
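a guessed-at fragment shader for this shell lookup (eyeDir, normal, baseUV, and the way t is derived are all assumptions on my part, not tested):

```glsl
#version 330
uniform sampler2D furTexture;  // bottom rows: solid fur; top rows: wispy hairs
in vec3 eyeDir;   // direction from fragment toward the eye
in vec3 normal;   // surface normal
in vec2 baseUV;   // regular texture coordinate, used for the s axis
out vec4 fragColor;

void main()
{
    // facing ratio: ~1 when seen from above, ~0 when seen edge-on
    float facing = clamp(dot(normalize(eyeDir), normalize(normal)), 0.0, 1.0);
    // seen from above -> sample near the solid bottom;
    // seen from the side -> sample near the wispy top
    float t = 1.0 - facing;
    fragColor = texture(furTexture, vec2(baseUV.s, t));
}
```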
possibly, this could be done without needing a secondary outer shell, but the problem is that when seen from the side, geometry would still cut off in a hard edge, whereas a shell could simulate a smooth fuzzy edge.
I guess another trick could be for the fur shader to simply translate the vertex slightly (in the vertex shader), avoiding the need for a separate layer in the model.
say, something like:
normal = vec3(NormalMatrix * Normal);
point0 = vec3(ModelView * Vertex);
org = vec3(ModelView * Origin);
dir = normalize(point0 - org);
// scale the offset so the displacement measured along the normal is Offset:
point = point0 + dir * (Offset / dot(dir, normal));
// or, skipping the normal (effectively a slight upscale around Origin):
// point = point0 + dir * Offset;
where the idea is that the point is pushed a certain offset outward from the origin, with the offset measured along the normal.
alternatively, we could skip the normal, in this case simply projecting the point a certain distance away from the origin (similar to simply upscaling that part of the model).
this should be fairly reasonable on most commonly available hardware.
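pulled together into a complete vertex shader, it might look something like this (uniform/attribute names are assumed, and the clamp on the dot product is something I added to guard against a near-zero division; untested):

```glsl
#version 330
uniform mat4 ModelView;
uniform mat4 Projection;
uniform mat3 NormalMatrix;
uniform vec4 Origin;   // model-space "center" to inflate away from
uniform float Offset;  // fur shell thickness
in vec4 Vertex;
in vec3 Normal;
in vec2 UV;
out vec2 baseUV;

void main()
{
    vec3 normal = normalize(NormalMatrix * Normal);
    vec3 point0 = vec3(ModelView * Vertex);
    vec3 org    = vec3(ModelView * Origin);
    vec3 dir    = normalize(point0 - org);

    // push the point outward so it sits roughly Offset above the surface;
    // clamp the denominator so near-tangent directions don't blow up
    float d = max(dot(dir, normal), 0.1);
    vec3 point = point0 + dir * (Offset / d);

    baseUV = UV;
    gl_Position = Projection * vec4(point, 1.0);
}
```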
none of this is tested though...
Edited by cr88192, 22 April 2013 - 01:34 PM.