
cowsarenotevil

Member Since 08 Nov 2002

#5141155 skin shading specular problem

Posted by cowsarenotevil on 21 March 2014 - 09:22 PM

I actually think the first one is the closest to correct; it's definitely a bit too shiny, but the alternative of not being shiny at all is much less accurate. The bottom one, in fact, looks much more like very porous stone than skin (setting aside the fact that the color is also pretty weird). The specular power and intensity should be modulated along the surface, as some parts of a real face are much more shiny/oily than others.
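To make the "modulated along the surface" part concrete, here's a minimal Blinn-Phong sketch (in Python, my own illustration rather than anything from the original post) where the intensity and exponent come from a hypothetical per-texel specular map instead of being constants:

import numpy as np

def blinn_phong_specular(n, l, v, spec_intensity, spec_power):
    """Blinn-Phong specular term with per-point intensity and power.

    n, l, v: unit surface normal, light direction, and view direction.
    spec_intensity, spec_power: values sampled from a per-texel "specular map",
    so oily regions (nose, forehead) can be shinier than the rest of the face.
    """
    h = l + v
    h = h / np.linalg.norm(h)               # half-vector
    n_dot_h = max(np.dot(n, h), 0.0)
    return spec_intensity * n_dot_h ** spec_power

# Same geometry, but an "oily" texel versus a "matte" one.
n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 0.3, 1.0]); l /= np.linalg.norm(l)
v = np.array([0.0, -0.2, 1.0]); v /= np.linalg.norm(v)
print(blinn_phong_specular(n, l, v, 0.8, 60.0))   # oily: tight, bright highlight
print(blinn_phong_specular(n, l, v, 0.2, 8.0))    # matte: broad, dim highlight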

 

The other thing that seems way off is the scale of the subsurface scattering. The light seems to be penetrating far, far too deeply.




#5098161 lit sphere shader

Posted by cowsarenotevil on 01 October 2013 - 05:38 PM

Yeah, like about only 60000 hits.

 

Looks like using the view normal to look up into a spherical texture.

 

Yeah, I'm pretty sure it's ultimately identical to any other kind of environment mapping (cube mapping, sphere mapping, etc.) in that it's just a function that maps angle to color. The only advantages I see are that it represents the function in a fairly intuitive way (and it's easy to capture from paintings, photographs, etc. of a spherical object) and that it stores the most resolution for angles that point toward the camera. The disadvantage is that it's actually only half of an environment map.
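For anyone wondering what that lookup actually is, here's a rough sketch (my own Python illustration, not anything from the thread): the view-space normal's x and y simply become the texture coordinates of a "lit sphere"/matcap image, which is also why only the camera-facing half of the environment can be represented:

import numpy as np

def litsphere_uv(view_space_normal):
    """Map a view-space normal to UVs in a lit-sphere (matcap) texture.

    The texture is a photo/painting of a sphere, so its x/y axes line up
    with the view-space x/y of the normal.
    """
    n = view_space_normal / np.linalg.norm(view_space_normal)
    return n[0] * 0.5 + 0.5, n[1] * 0.5 + 0.5       # remap [-1, 1] -> [0, 1]

def shade_litsphere(view_space_normal, matcap):
    """Nearest-neighbor sample of a matcap image given as an (H, W, 3) array."""
    u, v = litsphere_uv(view_space_normal)
    h, w, _ = matcap.shape
    x = min(int(u * (w - 1)), w - 1)
    y = min(int((1.0 - v) * (h - 1)), h - 1)        # flip v for image row order
    return matcap[y, x]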




#5087940 Ambient Occlusion from Depth Map?

Posted by cowsarenotevil on 21 August 2013 - 05:05 PM

Thank you both, those are pretty good answers. Do you think you could find one for SSAO, though, that just gives a simple equation for getting occlusion from an image? I made a program that loads a depth map image and runs an algorithm on each pixel. What should the equation per pixel be?

 

The reason there's no simple equation (at least, none simpler than what's already been posted) is that the "equation per pixel" has to, at a minimum, take into account a bunch of neighboring pixels: it needs to know not just the absolute depth but the depth relative to some localized area (there's a deliberately naive sketch of what that looks like after the two points below).

 

There's a perfectly good reason for this: it's called ambient occlusion precisely because you need to figure out how much light is prevented from reaching the target pixel (that is, how much is occluded); there's simply no way to know what light will be occluded unless you have some information about the neighboring geometry, as it's the neighboring geometry that's actually doing the occlusion.

 

Any additional complexity you perceive in the methods posted is probably a result of one of two things:

 

a) naively sampling all neighboring pixels at a wide enough radius is too slow for real-time performance, so the algorithms need some way to determine which pixels to sample to get a result that looks appealing. Usually this means picking some fairly arbitrary sample kernel and then changing it for each pixel so that any sampling error appears as high frequency noise.

 

b) to get something that even sort of looks like ambient occlusion, it's extremely valuable to think of it in terms of normals as well as just depth information. It takes a bit of additional calculation (derivatives) to reconstruct normals from just a depth map.
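To give a concrete (and deliberately naive) version of the "equation per pixel", here's a sketch that just walks a fixed kernel of neighboring depths and accumulates occlusion from neighbors that are closer to the camera. It's my own illustration in Python; the radius, bias, and falloff are made-up tuning values, and real SSAO implementations use randomized kernels and usually normals as well:

import numpy as np

def naive_depth_only_ao(depth, x, y, radius=4, bias=0.002, falloff=0.05):
    """Very naive depth-only ambient occlusion for one pixel.

    depth: 2D numpy array of depth values (smaller = closer to the camera).
    Each neighbor that is sufficiently closer than (x, y) contributes some
    occlusion, attenuated by how much closer it is.
    """
    h, w = depth.shape
    d0 = depth[y, x]
    occlusion, samples = 0.0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                diff = d0 - depth[ny, nx]            # positive if neighbor is closer
                if diff > bias:
                    occlusion += 1.0 / (1.0 + (diff / falloff) ** 2)
                samples += 1
    return 1.0 - occlusion / max(samples, 1)         # 1 = open, 0 = fully occluded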




#5083445 Make games, not engines (from a programmer viewpoint)

Posted by cowsarenotevil on 05 August 2013 - 10:47 PM

I'm pretty sure you're taking sort of the wrong message from that article: it's not (necessarily) saying to use someone else's engine, but rather to make sure any work you put into your own "engine" is directly helping the game itself.

 

I think it's basically just an extension of the "you aren't gonna need it" philosophy: if you spend the time making your base code generalizable enough that you can package it as an "engine," you've almost certainly implemented things that your game doesn't actually make use of, and worse, you've done it prematurely, without any evidence that those features would be useful for anyone else's game, either.




#5078955 "Standard" Resolution Performance

Posted by cowsarenotevil on 19 July 2013 - 10:32 AM

The best rule of thumb I can think of here would be to prefer outputting at the monitor's maximum supported resolution (which should be its native resolution).

 

Even that's not always perfect, though. I've run into quite a few LCD projectors, for instance, that "support" higher resolutions than they can actually display and instead downsample the input signal. Worse yet, the downsampling was very crude, but just "good" enough that it was clearly intended as a "feature."




#5078291 Specular light color (float vs float3)

Posted by cowsarenotevil on 16 July 2013 - 04:36 PM

- for non-metals, specular highlights are 99% of the time 'white'

- for metals, the specular highlights are mostly the color of the material itself

 

I'd change these "rules" to

 

- for non-metals, the specular highlights are the color of the light

- for metals, the specular highlights are the color of the light multiplied by the color of the material

 

The truth is that even this corrected version isn't based on any hard theory; it's just that, in practice, specular highlights on non-metals (e.g. plastic, skin, wood) are generally caused by a layer of "clear" material on top of the surface, be it oil, varnish, etc.

 

For instance, if you have a shiny ceramic object with gold leaf/trim, there'll be a difference depending on whether the gold is on top of the varnish; if the gold is on top, the gold parts will only have gold-tinted reflections, whereas if the varnish is on top, you'll see something like the sum of the gold-tinted reflections and the un-tinted reflections (which are also present on the ceramic part).
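A minimal sketch of the corrected rules above (my own Python illustration; the metalness blend is just the common convention, not anything from the original post):

import numpy as np

def specular_color(light_color, material_color, metalness):
    """Specular tint following the corrected rules above.

    Non-metals (metalness = 0) reflect the light color untinted;
    metals (metalness = 1) tint the reflection by the material color.
    Fractional metalness simply blends the two.
    """
    light_color = np.asarray(light_color, dtype=float)
    material_color = np.asarray(material_color, dtype=float)
    dielectric = light_color                         # untinted reflection
    metallic = light_color * material_color          # tinted by the material
    return (1.0 - metalness) * dielectric + metalness * metallic

print(specular_color([1.0, 0.9, 0.8], [1.0, 0.76, 0.33], 0.0))  # plastic-like
print(specular_color([1.0, 0.9, 0.8], [1.0, 0.76, 0.33], 1.0))  # gold-like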




#5077487 Global illumination techniques

Posted by cowsarenotevil on 13 July 2013 - 09:21 PM

 

Hmm, you're right. It looks like they're using cascaded shadow maps for both the static and dynamic geometry, which is interesting. I assume they bake only the indirect lighting and then just add in the direct lighting on the fly. If nothing else, it's probably easier to implement than storing the contribution of direct light onto static geometry.

 

Guys, I understand the part about shadows. It's not interesting if they are using static shadow maps for static level geometry. I don't think they just bake the indirect lighting and that's it. The actors and other objects moving through the level receive indirect lighting as well. I have a feeling they have some sort of lightmap on static levels and also have some "fill lights" placed here and there to simulate bounced light and to illuminate dynamic objects that move around.

 

 

You're right that I got sidetracked.

 

I think what you're suggesting ("fill lights") most closely resembles "virtual point lights," which are sometimes used in radiosity/deferred lighting systems. I've toyed with them a bit and found that they're probably best suited to non-realtime/"interactive" rendering, as it takes a fairly large number of VPLs, plus some kind of shadowing/occlusion, to make the technique work well. Like I said, I've only played with this a bit, so there might well be some interesting optimizations/approximations that I'm not aware of.

That said, MJP suggested baking light probes, which is a fairly similar idea. I'm not an expert on the subject, but I'll fill in the details to the best of my ability. I think the three most common variants are probes that store just color information (with no directional information), spherical harmonic probes, and irradiance maps.

 

They all typically involve interpolating between the probes based on the position of the dynamic actor that is being rendered.

 

In the first case (color only), we have stored the amount/color of light that has reached a given probe, so that the probes make something that resembles a point cloud of colors, or a (very coarse) 3D texture. When we want to light a dynamic object, we just interpolate between the colors (based on position) and use that color as the indirect term.
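Here's a minimal sketch of that first case (my own Python illustration; the grid layout and names are invented for the example): trilinearly interpolate between the baked probe colors surrounding the object's position:

import numpy as np

def sample_color_probes(probe_grid, grid_min, grid_max, position):
    """Trilinearly interpolate a coarse 3D grid of color-only light probes.

    probe_grid: (X, Y, Z, 3) array of baked indirect-light colors.
    grid_min, grid_max: world-space corners of the probe volume.
    position: world-space position of the dynamic object being lit.
    """
    probe_grid = np.asarray(probe_grid, dtype=float)
    res = np.array(probe_grid.shape[:3]) - 1
    lo = np.asarray(grid_min, dtype=float)
    hi = np.asarray(grid_max, dtype=float)
    # Map the position into continuous grid coordinates, clamped to the volume.
    t = (np.asarray(position, dtype=float) - lo) / (hi - lo)
    g = np.clip(t, 0.0, 1.0) * res
    i0 = np.floor(g).astype(int)
    i1 = np.minimum(i0 + 1, res)
    f = g - i0                                       # fractional part per axis
    color = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                ix = (i1 if dx else i0)[0]
                iy = (i1 if dy else i0)[1]
                iz = (i1 if dz else i0)[2]
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                color += w * probe_grid[ix, iy, iz]
    return color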

 

In the second and third cases (irradiance or spherical harmonic), the probes also define a way of looking up the light color based on the normal of the dynamic object we are rendering; aside from that, the idea is the same: we interpolate between the probes based on the dynamic object's position, and then do the look-up, only with the normal as input.
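And a correspondingly minimal sketch of the normal-dependent lookup for the spherical-harmonic case (again just an illustration; I'm assuming two SH bands with the basis/convolution constants already folded into the baked coefficients, which is a common simplification):

import numpy as np

def eval_sh2_irradiance(sh_coeffs, normal):
    """Evaluate two-band (L0 + L1) spherical-harmonic lighting for a normal.

    sh_coeffs: (4, 3) array of RGB coefficients, assumed to already include
    the basis/convolution constants from baking.
    Returns the indirect light color for a surface facing along `normal`.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    basis = np.array([1.0, n[1], n[2], n[0]])        # constant term, then y, z, x
    return np.clip(basis @ np.asarray(sh_coeffs, dtype=float), 0.0, None)

# The (4, 3) coefficient sets are interpolated between probes by position,
# exactly as in the color-only case, and then evaluated with the normal.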
 

These links might help you understand how to compute the probes for those cases. They're not explicitly geared toward pre-computing multiple probes (but rather computing irradiance for a single point efficiently) but they might help to point you in the right direction: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter10.html and http://codeflow.org/entries/2011/apr/18/advanced-webgl-part-3-irradiance-environment-map/

 

I invite anyone to correct my imperfect understanding of the subject.




#5075371 Global illumination techniques

Posted by cowsarenotevil on 04 July 2013 - 11:10 PM

Hmm, you're right. It looks like they're using cascaded shadow maps for both the static and dynamic geometry, which is interesting. I assume they bake only the indirect lighting and then just add in the direct lighting on the fly. If nothing else, it's probably easier to implement than storing the contribution of direct light onto static geometry.




#5075367 Global illumination techniques

Posted by cowsarenotevil on 04 July 2013 - 09:28 PM

I like the way the shadows in Naughty Dog's "The Last of Us" are unified.

That is, the actors' and environment's shadows merge together like they're supposed to.

Most games just place lightmaps on the environment and then dynamic shadows from actors blend over them - that is, they further darken the lightmaps.

This is unrealistic, since both these shadow representations come from the same light source and should merge together instead of darkening one another.

 

Based on the "low-resolution" appearance of the environment shadows in this game, I can assert that they're not lightmaps but actual static shadow maps, most likely rendered in the same pass as the actors' shadows so that they merge together.

There are most likely other lighting contributions involved in the sophisticated visuals for this game, but static shadow maps for the environment are participating.

 

I agree that this is the right effect to aim for, but I disagree that most games do it the "wrong" way. Unreal Engine 3, for instance, separates the "direct" component of dominant lights from the indirect component for exactly this reason (among others, such as doing different filtering on the sharp edges of direct lights* versus the smoother gradients of the indirect component). This makes it relatively efficient to combine lightmaps with dynamic shadow maps, as the shadow maps only block out the direct component and leave the indirect light unchanged (which is of course an approximation itself, but one that is generally acceptable).

 

*in fact, this may be an alternative explanation for the "low-resolution" appearance of the shadows you're seeing, meaning that it might still be lightmaps rather than shadow maps - it seems weird to recalculate even just the direct light shadows for static lights/geometry every frame for no reason
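For clarity, here's the arithmetic that split implies, as a sketch (my own illustration, not UE3's actual code): the dynamic shadow term only scales the direct contribution, while the baked indirect term passes through untouched, so static and dynamic shadows merge instead of double-darkening each other:

import numpy as np

def shade_with_split_lightmap(indirect_lightmap_rgb, direct_light_rgb,
                              n_dot_l, shadow_factor):
    """Combine a baked indirect-only lightmap with dynamic direct lighting.

    shadow_factor comes from the dynamic shadow map (0 = fully shadowed,
    1 = fully lit) and only attenuates the direct term.
    """
    indirect = np.asarray(indirect_lightmap_rgb, dtype=float)
    direct = np.asarray(direct_light_rgb, dtype=float) * max(n_dot_l, 0.0)
    return indirect + shadow_factor * direct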




#5073723 Normal artifacting exposes underlying topology

Posted by cowsarenotevil on 28 June 2013 - 05:38 PM

Like others have said, this is to be expected. If you find that it's a real problem in practice, though, you might be able to work around the "problem" by interpolating your normals non-linearly.

 


I'm surprised that you are not seeing at least some improvement by normalizing the normals in the fragment processor.

 

I disagree.

 

Here's (I hope) a useful analogy: imagine that, instead of "topology," you are working with only a low-resolution normal map, where each pixel corresponds to a "vertex." The old, fixed-function Gouraud shading is analogous to computing the lighting at the resolution of the original normal map, then scaling the whole image up to the target size with linear interpolation. Per-pixel (Phong) shading would involve scaling the normal map to the target size and then computing the lighting.

 

Note that if all we're doing is rendering the normal map (ignoring lighting), these two processes won't do anything different, so there's no advantage to doing it "per-pixel". The only way you'll get a result that doesn't exhibit the "topology" (which is really just like the pixelation of a scaled up image) is to use an interpolation algorithm that doesn't exhibit artefacts that you find unpleasant.

Ultimately, you're still interpolating, generating data where there is none; you just have to find a way to generate data that you like.
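To put some numbers on the analogy, here's a tiny sketch (my own, in Python) along a single edge: lighting the endpoints and interpolating the result (Gouraud-style) versus interpolating and renormalizing the normal and then lighting it. The two agree at the endpoints but diverge in between, which is exactly the kind of interpolation artifact being discussed:

import numpy as np

def lambert(n, l):
    """Simple Lambertian (N dot L) term, clamped to zero."""
    return max(np.dot(n, l), 0.0)

# Two "vertex" normals at the ends of an edge, and a fixed light direction.
n0 = np.array([0.0, 0.0, 1.0])
n1 = np.array([0.8, 0.0, 0.6])
l = np.array([1.0, 0.0, 0.0])

for t in np.linspace(0.0, 1.0, 5):
    # Gouraud-style: light the endpoints, then interpolate the *result*.
    gouraud = (1 - t) * lambert(n0, l) + t * lambert(n1, l)
    # Per-pixel style: interpolate (and renormalize) the normal, then light it.
    n = (1 - t) * n0 + t * n1
    n = n / np.linalg.norm(n)
    per_pixel = lambert(n, l)
    print(f"t={t:.2f}  gouraud={gouraud:.3f}  per-pixel={per_pixel:.3f}")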




#5071776 "Soft" shader, or, how do I get this skin-lighting effect?

Posted by cowsarenotevil on 21 June 2013 - 08:10 AM

It looks to me like those screenshots mostly show off similar, but not necessarily identical, effects. In particular, the third ones look like they achieve the "softness" mostly in a way that isn't viewing-angle sensitive (they both also seem to show specular lighting with a low specular power, but it's not necessarily anything special). The second one looks like it has a lot of ambient/environment light to give it the softness.

 

The first and fourth ones seem to show off lighting that gets brighter as the surface normal becomes more perpendicular to the view direction (i.e., toward grazing angles); as mentioned, there are a lot of ways to achieve this; the NVIDIA Shader Library has some "velvet" and "fuzz" materials that might be of use.
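The usual one-liner for that view-angle effect looks something like this (a sketch of the generic rim/velvet idea, not any particular shader from that library):

import numpy as np

def rim_term(normal, view_dir, power=3.0):
    """View-angle-dependent "rim"/"velvet"-style brightening.

    Gets brighter as the surface turns edge-on to the camera; `power`
    controls how tight the rim is.
    """
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    return (1.0 - max(np.dot(n, v), 0.0)) ** power

print(rim_term(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # facing camera: 0
print(rim_term(np.array([1.0, 0.0, 0.2]), np.array([0.0, 0.0, 1.0])))  # nearly edge-on: bright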

 

If you just want to play with some common algorithms, Blender supports a few shading models with its internal renderer; you might look at Minnaert (with a <1 "darkness" value) and Fresnel for viewing-angle sensitive lighting, and Oren-Nayar for a "rougher" (relative to Lambert), but still purely diffuse material.

 

Finally, if you specifically want the "skin transparency layering mimicking thing" the keyword to look for is "subsurface scattering"; to my knowledge the most common way to fake this in real time is to sum layers made by blurring the lighting in UV space.

 

EDIT: Wrote "blending modes" where I didn't mean to write "blending modes" at all.

EDIT: Wrote <0 when I meant <1
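In case it helps, here's a rough texture-space sketch of that "sum of blurred layers" idea (my own illustration using scipy's gaussian_filter; the sigmas and weights are placeholders, and real skin profiles blur the red channel further than the others):

import numpy as np
from scipy.ndimage import gaussian_filter

def fake_sss_texture_space(diffuse_lightmap, blur_sigmas=(2.0, 6.0, 14.0),
                           weights=(0.5, 0.3, 0.2)):
    """Texture-space subsurface-scattering fake.

    diffuse_lightmap: (H, W, 3) diffuse lighting already rendered in UV space.
    The result is a weighted sum of progressively blurred copies, which softens
    the lighting as if it had diffused a short distance under the skin.
    """
    out = np.zeros_like(diffuse_lightmap, dtype=float)
    for sigma, weight in zip(blur_sigmas, weights):
        # Blur across the UV axes only, leaving the color channels independent.
        blurred = gaussian_filter(diffuse_lightmap.astype(float),
                                  sigma=(sigma, sigma, 0))
        out += weight * blurred
    return out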




#5070196 How is Ambient Occlusion Calculated?

Posted by cowsarenotevil on 16 June 2013 - 11:06 AM

Hey, this article could be very interesting for you: http://theorangeduck.com/page/pure-depth-ssao

 

Apparently they compute the normals from the depth map during the SSAO pass. P.S.: It was the first hit on Google for "ssao depth only".

 

For the record, a lot of implementations don't even bother reconstructing the normals; they only measure the difference in depth between the occluder and the occludee.




#4977483 Valve introduce greenlight fee - is $100 too much?

Posted by cowsarenotevil on 06 September 2012 - 09:55 PM

I'm seriously considering Steam Greenlight for a game I'm working on. I think it would be nice if the fee could get refunded if the game reached some minimum threshold of support (ideally less strict than actual acceptance) but it's not going to make my decision one way or another.

Why do they need to charge? Greatness in a pool of garbage will rise to the top by votes from users. That's how iPhone/Android work, and that's how Kickstarter works.


Well, I think "works" might be too strong a word. Blatantly stolen and nonfunctional apps can be profitable on both iPhone and Android, and occasionally supersede apps that are actually good. I will say it's a sign that Steam may need to tweak their algorithms if a few crappy entries being posted can ruin their service in the first few days, though.


#4894948 Why does OpenGL not show some of my faces?

Posted by cowsarenotevil on 18 December 2011 - 12:05 AM


It's possible that you're not being consistent with whether faces are drawn clockwise or counterclockwise, meaning that some of the faces are "backwards," that is, facing away from the camera. If you have backface culling enabled or any of various lighting setups, those faces will not be rendered or will appear black.


I commented out all of my lighting code and nothing changed. I doubt it has to do with that.


Did you try doing glDisable(GL_CULL_FACE) right before rendering the faces as well, just to be sure?

I should also add that when I keep a cube in a static position, the order in which I render the different faces of the cube matters. For example, if I render the vertices for the bottom face first, it will eventually be overlaid by the top face once I render it.


I see you have glEnable(GL_DEPTH_TEST) in your init code, so my guess is that either your depth buffer is not set up correctly (e.g. zNear and zFar are not set correctly, among other things) or depth testing is somehow being disabled before you actually render the faces.


#4887758 Why is it that game designers should not have emotions for their ideas?

Posted by cowsarenotevil on 25 November 2011 - 06:23 PM

Without that ruthlessness -- that willingness to kill ANY of your ideas in the service of the greater story -- you will be self-indulgent, and great writing is never self-indulgent.


Self-indulgent writing may or may not ever be great, but it is often profitable.



