
#ActualBrother Bob

Posted 22 July 2013 - 12:09 PM

Here's (I hope) a useful analogy: imagine that, instead of "topology," you are working with only a low-resolution normal map, where each pixel corresponds to a "vertex." The old, fixed-function Gouraud shading is analogous to computing the lighting at the resolution of the original normal map, then scaling the whole image up to the target size with linear interpolation. Per-pixel (Phong) shading would involve scaling the normal map to the target size and then computing the lighting.

 

I'm not surprised that you disagree.  If a person were to shrink a normal map down to almost nothing and then resize it back up again, that would introduce such unsightly artifacts as to make the texture completely hideous and unusable.  I would never consider doing this; there would be no point in using normal maps at all.

 

Why in the world would you consider scaling down a normal map only to scale it back up with linear interpolation, or any interpolation for that matter?  This made me think of an analogy: you spend the whole weekend polishing your car, only to finish by slinging mud at it and grinding the mud into the paint.

 

The proper way to generate a normal map is to estimate roughly how many pixels the model will occupy on screen, then set your 3D modeling program to generate the normal map at a resolution that fills that much screen space; to be safe, generate one power-of-two size larger.  Only shrink it down when you're sure it's ready to go.  Don't try to save space now and then up-size it later.  If you do end up in that situation, use the 3D modeling software to re-generate the map at the larger size.
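As a rough sketch of that sizing rule (the function name and the one-step safety margin are my own illustration, not anyone's official workflow):

```python
import math

def normal_map_size(screen_pixels_w, screen_pixels_h, safety_steps=1):
    """Pick a power-of-two texture size covering the model's estimated
    on-screen footprint, plus one extra power-of-two step as a safety
    margin, per the advice above."""
    def next_pow2(n):
        return 1 << max(0, math.ceil(math.log2(max(1, n))))
    w = next_pow2(screen_pixels_w) << safety_steps
    h = next_pow2(screen_pixels_h) << safety_steps
    return w, h

# A model expected to cover roughly 300x200 screen pixels:
print(normal_map_size(300, 200))  # -> (1024, 512)
```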

 

If you want to compress your normal maps, make sure you are using a lossless format or you will damage them.  Visually, it will look as though you took sandpaper to your finely polished model.

Normals that are compressed into color space in an 8-bit RGB texture have already been damaged by quantization to begin with.  Only floating-point textures avoid this completely.

 

Personally, I cringe at the idea of using a normal map that is smaller than the screen area the model takes up.  Normal maps can be mipmapped down without much corruption if you get the filter settings right, but they cannot be sampled up in size without seriously damaging them.  Unless they are blurred, of course: low-frequency detail can be scaled up without much apparent aliasing creeping in.
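The mipmapping half of this can be sketched as follows, assuming a plain box filter (one common approach, not the only one).  Averaging unit normals shortens the result, so the renormalize step is what keeps the mip level usable:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def mip_downsample(quad):
    """Average a 2x2 block of unit normals and renormalize, producing
    one texel of the next-smaller mip level."""
    avg = tuple(sum(n[i] for n in quad) / 4 for i in range(3))
    return normalize(avg)

quad = [normalize((0.2, 0.1, 1.0)), normalize((-0.1, 0.2, 1.0)),
        normalize((0.0, -0.2, 1.0)), normalize((0.1, 0.0, 1.0))]
m = mip_downsample(quad)
print(math.sqrt(sum(c * c for c in m)))  # -> 1.0 (unit length restored)
```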


#10 marcClintDion

Posted 22 July 2013 - 03:06 AM

There is far too much arrogance and outright abuse by site moderators; they are teaching other people to behave this way.  The posts I've made will all shortly be removed and replaced with this notice.  Game development is not the only thing being taught here; bad behavior is being taught as well.

#8 marcClintDion

Posted 30 June 2013 - 12:54 PM


You could probably also get something that looks better if you sampled the terrain normals in the pixel shader instead of the vertex shader, since then you'd get values interpolated between 4 normals instead of 3.

 

This idea of more accurate sampling through extra information is helped further by math the GPU performs automatically.  Vertex attributes are interpolated on a per-fragment basis, so the normals used in the fragment processor blend smoothly across each triangle, and once renormalized they sweep out an arc that approximates a perfectly rounded surface.

This is similar to adding more control points to a Bézier curve; the GPU effectively does this to vectors automatically when they are interpolated for the fragment processor.

This is why I said I'm surprised that you see no difference when you perform the normalization in the fragment processor.  You should see a noticeable difference when you do.
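A tiny numerical sketch of why fragment-stage normalization should matter: linearly interpolated unit normals shrink, which dims diffuse lighting unless they are renormalized.  (Plain Python standing in for shader math; the vectors are made up.)

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lerp(a, b, t):
    return tuple((1 - t) * ca + t * cb for ca, cb in zip(a, b))

# Two unit vertex normals 90 degrees apart.
n0 = (1.0, 0.0, 0.0)
n1 = (0.0, 1.0, 0.0)

mid = lerp(n0, n1, 0.5)  # what plain interpolation hands the fragment
length = math.sqrt(sum(c * c for c in mid))
print(length)            # ~0.707: the interpolated normal has shrunk

light = (0.0, 1.0, 0.0)
unnormalized = sum(a * b for a, b in zip(mid, light))             # 0.5
renormalized = sum(a * b for a, b in zip(normalize(mid), light))  # ~0.707
```

The two dot products differ by about 40 percent at this fragment, which is the kind of visible difference the post is arguing for.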


