quasty

Member · 118 posts · Community Reputation: 114 (Neutral)

  1. Thank you very much! I've additionally found these explanations describing morph and pose animation as two types of vertex animation, with pose animation being a more advanced version of morph animation in that it supports multiple poses. http://www.ogre3d.org/docs/manual/manual_70.html
  2. quasty

    diffuse + Fresnel term

     Thank you. I've done some more testing. Since I don't have properly-valued HDR data I'm trying to "fudge" this, but I'm also looking for a way to keep it stable, in terms of how the specular component can best be described as a solid specular reflection model for a human face. Currently the specular Blinn/Phong component is parametrized by two maps (a specular map and a shininess map) over the face. This gives more or less soft specular reflections, as desired, but without accounting for the specular reflections at grazing angles. So this Fresnel term by Schlick describes the spatially varying "potential" of a specular reflection, doesn't it? I tried shifting the specular intensity (Is) - calculated from the parameters of these two maps - around based on the Fresnel term (Fr), which never really worked. I just tried something like this:

         float Fr = fresnel(...);
         float specfactor = max(Fr, SpecularMap.r); // instead of: float specfactor = SpecularMap.r;
         Is = specular(N, L, V, specfactor, specexponent);

     and calculated the specular intensity based on that. A high Fresnel "potential" (Fr) increases the specular level and overrides the factor from the specular map, because apart from some areas (nose, lips, forehead) the specular factor is generally quite low. It's not quite right yet, but I get specular reflections at grazing angles as well as on orthogonal surfaces (if need be), and only at grazing angles when a light source is positioned so that it is actually reflected. But I was wondering whether this is "right", or whether it would be more prudent to do it differently? Is there something physically more plausible, more correct, that describes this type of reflection model (on a human face)? Thank you very much
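     (A minimal GLSL sketch of the combination described above; legacy GLSL, and names such as specularMap and shininessMap are placeholders, not from a specific source - an illustration of the idea, not a reference implementation:)

         uniform sampler2D specularMap;   // per-texel specular level
         uniform sampler2D shininessMap;  // per-texel exponent, scaled below
         varying vec3 N, L, V;            // normal, light and view vectors

         // Schlick's approximation: F = R0 + (1 - R0) * (1 - N.V)^5
         float fresnelSchlick(float R0, float NdotV)
         {
             return R0 + (1.0 - R0) * pow(1.0 - NdotV, 5.0);
         }

         void main()
         {
             vec3  n  = normalize(N);
             vec3  l  = normalize(L);
             vec3  v  = normalize(V);
             vec2  uv = gl_TexCoord[0].st;
             // R0 ~ 0.036 for skin (refraction index n = 1.47)
             float Fr = fresnelSchlick(0.036, max(dot(n, v), 0.0));
             // let a strong Fresnel term override a low specular-map value
             float specFactor = max(Fr, texture2D(specularMap, uv).r);
             float specExp = 1.0 + 127.0 * texture2D(shininessMap, uv).r;
             vec3  h  = normalize(l + v);
             float Is = specFactor * pow(max(dot(n, h), 0.0), specExp);
             gl_FragColor = vec4(vec3(Is), 1.0);
         }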
  3. Hi, in many papers about facial rendering these lucky guys used some pretty expensive 3D scanners to create high-resolution meshes with millions of polygons and details down to the mesostructure of the skin. Does anyone know whether at least one of these very detailed meshes of a real person has been made public for normal people to play around with? Not to use it commercially in any way, only to work, test, and play with. Just one? :) Thank you very much :)
  4. Thank you very much! I suspected rho_s to be some sort of specular map factor, but - when thinking about it - wouldn't such a factor rule out any chance of this safely integrating to 1? The term is from section 5.3 of this paper, if that clears things up: http://graphics.ucsd.edu/~henrik/papers/skin-analysis/skin-analysis.pdf Thanks a lot!
  5. Hi, I've found a modification (?) of the original Blinn/Phong specular model that looks like this: rho_s * (n+2)/(2*Pi) * cos^n(theta_h). The cosine part with shininess n is clear. rho_s is called a "scaling coefficient", which seems to me would be controlled by a specular map, wouldn't it? But (n+2)/(2*Pi) is called an "energy normalization", "so that the cosine lobe always integrates to one" - which I don't understand. Does anyone know what this means? It is used in offline rendering, and I was wondering whether one could somehow benefit from it in real-time rendering? Thank you very much!
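     (For reference, the normalization claim can be checked directly; with theta the angle between N and the half vector, integrating the lobe weighted by the projected-area cosine over the hemisphere:)

         \int_\Omega \frac{n+2}{2\pi} \cos^n\theta \, \cos\theta \, d\omega
           = \frac{n+2}{2\pi} \int_0^{2\pi}\!\!\int_0^{\pi/2} \cos^{n+1}\theta \, \sin\theta \, d\theta \, d\phi
           = \frac{n+2}{2\pi} \cdot 2\pi \cdot \frac{1}{n+2} = 1

     So the (n+2)/(2*Pi) factor makes the cosine lobe integrate to one: raising the exponent n narrows the highlight without changing the total reflected energy, leaving rho_s as the only control over the overall specular level.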
  6. quasty

    paint into NormalMap

     Hi, over the last months I've frequently read about artists painting *into* normal maps, mostly for fine-scale details. I'm building human face models at the moment and was looking for ways to add skin structure to them, and I like the idea of simply adding more fine structural detail to tangent-space normal maps. Does anyone know how to do this, or know helpful tutorials or tools for it? Thank you!
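     (One commonly described workflow, sketched here with assumed names: paint the fine detail as a grayscale height map, convert it to a tangent-space normal map with a tool such as the NVIDIA normal-map filter for Photoshop, and combine it with the base map in the shader:)

         // Hedged sketch: add a tiled fine-detail normal map on top of a
         // base tangent-space normal map; names are placeholders.
         uniform sampler2D baseNormalMap;
         uniform sampler2D detailNormalMap;

         vec3 fetchNormal(sampler2D map, vec2 uv)
         {
             return texture2D(map, uv).xyz * 2.0 - 1.0; // unpack [0,1] -> [-1,1]
         }

         void main()
         {
             vec2 uv     = gl_TexCoord[0].st;
             vec3 base   = fetchNormal(baseNormalMap, uv);
             vec3 detail = fetchNormal(detailNormalMap, uv * 16.0); // tile the detail
             // keep the base z, perturb x/y by the detail offsets
             vec3 n = normalize(vec3(base.xy + detail.xy, base.z));
             gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);
         }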
  7. Hi, I'm looking into ways to do some facial animation in my application. The animation itself is more or less the easy part, because I can work with a FacialGeneratorTool which lets me generate a facial expression simply by adjusting a few buttons :) I'm looking for ways to use these animations on my mesh in real time, but there are some things I'm unsure about. I can capture multiple poses in 3ds Max, but later on, will only the poses be used, or the animation created between two poses? Does one export just the poses (A, B, C for example) and, when using the mesh in an engine with pose animation support, dynamically decide "show me an animation from A to B, or from B to C, or from A to C"? Or won't it work dynamically, so that I need to export a specific animation from one pose to another ("show me animation a_to_b") for every pair of poses there is? What is usually done in real time, and what isn't? On top of that - if dynamic blending between different poses is possible, does a blend also work in the middle of a transition, for example, while showing an animation from A to B, suddenly blending from a point somewhere between A and B to pose C? Also, do morph target, vertex, and pose animation generally always mean the same thing, with skeletal animation being the alternative? I've often heard of the former three, but it always seems they are more or less the same. Are these animations usually done on the CPU, so the GPU only gets the transformed vertices? Or do the shader programs need to be adjusted somehow to work on an animated mesh, even for simple Blinn/normal-mapping shaders? The mesh format I'm using supports animations, but I'm unsure how to use them until I understand what can be done. :( If anyone can help me with some little insights, that would be really great! Thank you!
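     (A hedged sketch of how pose blending typically looks on the GPU: each pose is stored as a per-vertex offset from the base mesh and the engine animates the weights, so a blend can start from any point of a running transition; attribute and uniform names are assumptions:)

         // Two pose offsets blended in the vertex shader (legacy GLSL).
         attribute vec3 poseOffsetA;  // (poseA - base) per vertex
         attribute vec3 poseOffsetB;  // (poseB - base) per vertex
         uniform float weightA;       // animated on the CPU each frame
         uniform float weightB;

         void main()
         {
             vec3 p = gl_Vertex.xyz
                    + weightA * poseOffsetA
                    + weightB * poseOffsetB;
             gl_Position = gl_ModelViewProjectionMatrix * vec4(p, 1.0);
         }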
  8. quasty

    GLSL: world-space normals

     Thank you, I forgot to mention that I use Ogre, which I found out should make this easier, although I cannot execute OpenGL commands directly. But there is an auto-updated uniform constant which I can pass to the shader: a 4x4 matrix called inverse_view_matrix. Using it like this:

         mat4 toWorldSpace = gl_ModelViewMatrix * inverse_view_matrix;
         vec4 N_ws = vec4(gl_Normal, 0.0);
         N_ws = N_ws * toWorldSpace;

     N_ws should be a normal in (absolute) world space, shouldn't it? If I assign this normal as a color value and rotate the object, the changing normals on the object should become visible. But I'm afraid this isn't happening - the color values get a little darker than in object space, but they do not change while rotating. I expect the constant does credit to its name, but where could the error be, apart from that? Thank you very much.
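     (For comparison, a hedged sketch with the multiplication order that should recover the world matrix, assuming inverse_view_matrix really holds V^-1 and column-vector conventions; strictly, the inverse-transpose would be needed for non-uniform scale:)

         uniform mat4 inverse_view_matrix; // bound to Ogre's auto-constant

         void main()
         {
             // world = V^-1 * (V * M) = M  -- note the order
             mat4 toWorld = inverse_view_matrix * gl_ModelViewMatrix;
             vec3 N_ws = normalize((toWorld * vec4(gl_Normal, 0.0)).xyz);
             // remap [-1,1] -> [0,1] so the normal is visible as a color
             gl_FrontColor = vec4(N_ws * 0.5 + 0.5, 1.0);
             gl_Position = ftransform();
         }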
  9. quasty

    diffuse + Fresnel term

     Thank you. :) But I'm wondering - the quotient R0 is pretty small for most values, currently R0 = 0.036 approximately, using n = 1.47. And when using the Fresnel term F = R0 + (1-R0)(1 - N.V)^5, it can almost be reduced to F = (1-N.V)^5 because R0 is so small. This will basically highlight the silhouettes of the mesh. But when using this factor to blend between specular and diffuse, no specular parts will be visible anywhere in direct view (where N.V = 1) - from what I tested, the Fresnel value at these points is always ~0. Is this correct? It would mean no specular values could be displayed on an orthogonal surface. Additionally, a high Fresnel value (at the silhouettes) doesn't necessarily mean there is something specular to blend into, does it? Since the term seems to be independent of the light source, one would blend into black if there isn't something specular (or not enough), instead of into a possibly colored diffuse value. I was wondering about these results and whether I've grasped it correctly? Does R0 need to be that small? And what about these specular blend issues when using basically (1-N.V)^5? Thank you very much! :)
  10. Hi, I've been using 9 sine and 9 cosine calculations per fragment. That already pretty much killed my fps on my NVIDIA 6800. Since they're more or less the same, I thought about precalculating them. This wouldn't be as exact, but I believe it would be sufficient for my needs if I precalculated about 80-140 sine and cosine values and passed them to the shader via a uniform array. One would only need to convert radians to array indices. I was wondering if someone has tried this? Does it make sense to precalculate so many values, or will this counteract any speedup in calculation time? Or is there maybe a better solution to speed up sine/cosine calculations, maybe some vendor-specific extensions that use lookup tables? Thanks
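     (If you try the lookup-table route, a texture is usually friendlier to that hardware generation than a uniform array, since dynamically indexed arrays in fragment shaders were poorly supported. A hedged sketch: bake sin and cos into the R and G channels of a 1D texture over [0, 2*pi) with repeat wrapping, remapped to [0,1] for 8-bit storage:)

         uniform sampler1D sinCosTable; // R = sin, G = cos, stored as *0.5+0.5

         const float TWO_PI = 6.2831853;

         vec2 fastSinCos(float angle)
         {
             // repeat wrapping handles angles outside [0, 2*pi)
             return texture1D(sinCosTable, angle / TWO_PI).rg * 2.0 - 1.0;
         }

     Whether this actually beats the built-in sin/cos depends on the hardware, so it is worth benchmarking both paths.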
  11. quasty

    diffuse + Fresnel term

     I've found how to calculate R0 ( ((1-n)/(1+n))^2 ), but I'm wondering which form is correct, because Schlick's Fresnel approximation is described in an NVIDIA tech report as F = R0 + (1-R0)(1 - N.L)^5, but in the above paper (L5.pdf) it is quoted as F = R0 + (1-R0)(1 - N.V)^5. Does anyone know for sure which is correct? Thanks
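     (Plugging the numbers from the related post into that formula, as a quick check:)

         R_0 = \left(\frac{1 - n}{1 + n}\right)^2
             = \left(\frac{1 - 1.47}{1 + 1.47}\right)^2
             = \left(\frac{-0.47}{2.47}\right)^2 \approx 0.036

     which matches the ~0.036 quoted above for n = 1.47. As for N.L versus N.V: in the exact microfacet form the exponent uses the angle of incidence against the half vector H, and since H.L = H.V, the N.L and N.V variants are both simplifications of the same quantity.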
  12. quasty

    GLSL: world-space normals

     Not taken as hijacking @anonymous :) But I'm afraid I cannot help you. Quote: "If you want to bring your normals into world space you will need to separate your Model and View matrices on the CPU and pass in the model matrix as a separate constant." Thank you! I've been searching for this but wasn't very successful, so I have no idea how to do it - how do I get a Model matrix separated from the ModelView matrix? Thanks a lot!
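     (A hedged sketch of what the quoted advice looks like on the shader side: the application keeps the model/world matrix it already uses to place the object and uploads it as its own uniform, e.g. via glUniformMatrix4fv in plain OpenGL; in Ogre, param_named_auto exposes a world_matrix auto-constant for the same purpose:)

         uniform mat4 modelMatrix; // the world matrix, uploaded by the app

         void main()
         {
             vec3 N_ws = normalize((modelMatrix * vec4(gl_Normal, 0.0)).xyz);
             gl_FrontColor = vec4(N_ws * 0.5 + 0.5, 1.0);
             gl_Position = ftransform();
         }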
  13. quasty

    diffuse + Fresnel term

     Thank you both. :) I've begun reading the Cook and Torrance paper (though I'm still having some trouble with it) and just finished the L5.pdf, which was already very useful. One thing is still not clear to me: is Fresnel always both the evaluation of the amount of light reflected off the surface (and, by extension, the amount entering the material) and the way the ray is refracted? From my current understanding these are two separate things, aren't they? There was the Fresnel formula by Schlick: F = R_0 + (1 - R_0)(1 - v.n)^5. What puzzles me: R_0 is not the refraction index, is it? Or is that missing, then? The more complex formula two slides earlier has it: F = 1/2 * (g-c)^2/(g+c)^2 * ( ... ), with g = sqrt(n^2 + c^2 - 1) and c = v.h. I guess n is the refraction index and h the half vector as introduced by Blinn? I'm asking mainly because I want to know how much light is reflected in specular terms from the material "skin" at a certain point under a certain angle. But secondly, I use multiple texture layers and a parallax vector that enters the skin and calculates offset coordinates for a sub-surface texture. It would be nice if this could be done with a correctly refracted ray. I'm still missing something here. Any insights on how to use these terms correctly? Thank you very much!
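     (On the refracted ray: the reflected amount and the refracted direction are indeed two separate things - Fresnel gives the former, Snell's law the latter - and GLSL's built-in refract() computes the refracted direction from the incident vector, the normal, and the index ratio. A hedged sketch for the sub-surface offset, with layerDepth and the tangent-space setup as assumptions:)

         uniform sampler2D subsurfaceMap;
         uniform float layerDepth;    // layer thickness in texture units
         varying vec3 viewTS;         // surface-to-eye vector, tangent space

         void main()
         {
             vec3 v = normalize(viewTS);
             // eta = n_air / n_skin = 1.0 / 1.47; N = (0,0,1) in tangent space
             vec3 r = refract(-v, vec3(0.0, 0.0, 1.0), 1.0 / 1.47);
             // follow the refracted ray down to the sub-surface layer
             vec2 offset = r.xy * (layerDepth / max(-r.z, 1e-3));
             gl_FragColor = texture2D(subsurfaceMap, gl_TexCoord[0].st + offset);
         }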
  14. Hi, I'm experimenting with MRTs and I want to render the world-space normals of an object to a texture. The problem is that the normals arrive in the shader in object space, and I couldn't find a matrix which simply transforms from object space to world space, only to projection/eye space. I tested almost every predefined matrix there is and wrote the transformed normals as color output; either the normals were still in object space (one could rotate the object and the color values of the normals didn't change), or it was some sort of projection space or so, which just looked wrong. Does such a matrix exist, or does it need to be assembled somehow? I expected this to be more or less simple, but couldn't find an answer. Thank you
  15. Thanks. I've tested some simple SSS techniques, and this effect, which covers the translucency part, could work for my scenario. I've found the paper by now, and a slightly different one here (chapter "subsurface scattering using depth maps"). The concept is the same: render the scene from the light source and subtract the depth values between the back-to-front and front-to-back rendering passes. The result should be the distance light has to travel through the object, and when these maps are blurred I believe it should give a nice effect. But what I don't understand is how a distance value is calculated for every fragment. In the end I need a distance value for every point of the object. Would this mean the vertices would need to be transformed to their positions inside texture space (since, when rendering a normal "image" from the light source's position, I would only get values for the visible parts)? But wouldn't that mess up the depth values when rendering b2f/f2b? And there is no mention of rendering to texture space. Any ideas/help/suggestions on how to implement this effect would be great! Thanks a lot!
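     (A hedged sketch of the shading pass for this technique: no unwrap into texture space is needed, because each fragment can be projected into the light's clip space in the shader and the thickness read as the difference of the two stored depths; map names and the falloff are assumptions:)

         uniform sampler2D frontDepthMap; // depths of front faces, from the light
         uniform sampler2D backDepthMap;  // depths of back faces, from the light
         varying vec4 lightSpacePos;      // vertex pos * light view-projection

         void main()
         {
             // project this fragment into the light's [0,1] texture space
             vec2 uv = (lightSpacePos.xy / lightSpacePos.w) * 0.5 + 0.5;
             float thickness = texture2D(backDepthMap, uv).r
                             - texture2D(frontDepthMap, uv).r;
             // simple exponential falloff as a stand-in for real scattering
             float translucency = exp(-8.0 * max(thickness, 0.0));
             gl_FragColor = vec4(vec3(translucency), 1.0);
         }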