Polydone

Downsizing normal maps


I was wondering: is downsizing a normal map as simple as downsizing it in an image editor, as long as you're working with 2^n sizes? Simply take the average of every 2x2 block of pixels? Or are there special cases where this approach will fail, like close to edges etc.?

Thanks, this is definitely food for thought - I'll keep this in mind if I notice any anomalies.
My reason for asking is that I'm working with some assets whose textures might be much more detailed than what I really need, so I just want to cut down on the memory requirements and possibly also gain better performance.
I guess what I want to achieve is pretty much the same as mip-mapping, except I won't be using the high-res texture.
If my assumptions are correct then I should also be able to fit more textures into an atlas - that would be sugar on top :)
I did a little googling and found this one - I'll have to read it later.
http://www.nvidia.com/object/mipmapping_normal_maps.html
Fortunately I don't have to worry about performance since this will not be done at runtime. Edited by Polydone


The recommended way is to bake the texture to a new, smaller, empty texture.

Baking this way is better than scaling: because it reads the normals from the mesh, it can decide intelligently what goes where.

It's recommended you use this for all texture adjusting; on non-2D models it preserves around 20% more of the texture than any scaling formula I have ever used.

The downside is that there is no real-time resizing.

You can do this with Blender or xNormal.

Blender tut: [video embed removed] - just took a quick look, it seems to be right.

https://docs.blender.org/manual/en/dev/render/blender_render/bake.html - the wiki page on baking.

In a basic sense, yes - you can just resize a normal map like a normal image and it will mostly work. However, there are a few things to watch out for:

As MJP stated here, most of the problems from just scaling are not noticeable to the human eye. If you only scale once it's fine; however, scaling over and over will accumulate errors to the point where they can be noticed.
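
Roughly, the straightforward version looks like this (a minimal sketch rather than production code, assuming an 8-bit RGB tangent-space map in a numpy array; the function name is only for illustration):

    import numpy as np

    def downsample_normal_map(rgb):
        # rgb: uint8 array of shape (h, w, 3), power-of-two sized.
        # Decode [0, 255] -> [-1, 1] and work in float32 so rounding doesn't accumulate.
        n = rgb.astype(np.float32) / 255.0 * 2.0 - 1.0
        h, w, _ = n.shape
        # Average every 2x2 block of pixels.
        n = n.reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))
        # Renormalize each texel back to a unit vector.
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        # Re-encode to 8-bit.
        return np.round((n + 1.0) * 0.5 * 255.0).astype(np.uint8)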

 

Baking won't cause errors unless the UVs change dramatically (moving them around is harmless - that is how atlasing works) or you attempt to make a small image larger.

 

 

A large saving factor that is often overlooked is simply converting to .jpg; converting a .png to .jpg can reduce the file size to about 1/4 of the original, saving about 1/2 on loading time.
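
For what it's worth, the conversion itself is a one-liner with something like Pillow (just an illustration, not part of the original post; the filenames are made up):

    from PIL import Image

    # Re-save an existing png normal map as a (lossy) jpg.
    Image.open("normal_2048.png").convert("RGB").save("normal_2048.jpg", quality=85)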

Edited by Scouting Ninja

I believe that will only work if I have the original high-poly mesh?
In my case my only source for the normals is the normal map itself.



A large saving factor that is often overlooked is simply converting to .jpg; converting a .png to .jpg can reduce the file size to about 1/4 of the original, saving about 1/2 on loading time.

 

Never do that. JPG is a lossy format, and it's designed for photos, not normal maps.


I believe that will only work if I have the original high-poly mesh? In my case my only source for the normals is the normal map itself.

 

Yep, in that case baking is not an option.

 

Some tips: If you use Photoshop, experiment with the various scaling filters.

 

If you do it yourself, the simplest high-quality method to downscale to half resolution is:

Blur the image before you downscale. Horizontal pass: NewPixel = (pixel*2 + leftPixel + rightPixel) / 4, then a second pass vertically.

Downscale as usual by averaging 4 blurred pixels to the final value (+ normalize in your case).
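
In code that might look something like this (a rough sketch, assuming a float (h, w, 3) array already decoded to [-1, 1] and a tiling texture; the names are only illustrative):

    import numpy as np

    def blur_then_halve(normals):
        # normals: float array (h, w, 3) holding decoded vectors in [-1, 1].
        # Horizontal pass: new = (2*pixel + left + right) / 4.
        left = np.roll(normals, 1, axis=1)
        right = np.roll(normals, -1, axis=1)
        blurred = (2.0 * normals + left + right) / 4.0
        # Vertical pass with the same (1, 2, 1)/4 weights.
        up = np.roll(blurred, 1, axis=0)
        down = np.roll(blurred, -1, axis=0)
        blurred = (2.0 * blurred + up + down) / 4.0
        # np.roll wraps at the borders, which suits tiling textures;
        # a non-tiling map would need clamped edges instead.
        # Downscale as usual: average each 2x2 block, then renormalize.
        h, w, _ = blurred.shape
        small = blurred.reshape(h // 2, 2, w // 2, 2, 3).mean(axis=(1, 3))
        return small / np.linalg.norm(small, axis=2, keepdims=True)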

 

Mostly the blur produces much better images, but it always depends. (I guess the quality is similar to the more complicated method described in the NVidia paper.)

 

Complication: You may want to exclude pixels outside the polygons if you don't have enough valid information around them. And if so, you may also need to add a new border and/or smooth discontinuities across UV seams. It gets hairy there.


Never do that. JPG is a lossy format, and it's designed for photos, not normal maps.

 

What? Lossy compression works on both photos and normal maps, even if it wasn't made for normals.

 

To show the effects, and to make the case that jpg is a good choice, I ran a test.

 

Method:

I baked the large master normal map as a png with 15% compression, the most commonly used texture type among indie developers.

I then converted the image to jpg, scaled the image down, and saved it again as both png and jpg.

The image uploaded to Imgur is a png with 0% compression to keep it clean.

The size of the renders was decided by the object's size in game (also to keep things small here), and it was rendered as an OpenGL render.

[spoiler]AMyRkBX.png[/spoiler]

 

As you can see, just converting the normal map to a jpg reduced the file size to almost a quarter of the original, even without reducing the resolution of the texture.

Also note that the 2048 jpg is smaller in KB than the 512 png.

It's only in the last image that lossy compression starts to show; at the distance it will be seen in game, combined with a color texture, this won't be noticed.

 

Here are 3 of the ways to scale an image:

[spoiler] fnP2UoR.png [/spoiler]

The first one is the high poly baked to a 512 texture, the second is the 2048 texture baked to a 512 texture, and the last is the 2048 scaled to a 512 texture.

 

This image shows just how minuscule the differences between them are.

Now you might be wondering what the problem with scaling is, then. If you look at the last of the normal maps, you will see there are smudges of light yellow in the image.

Yellow will result in a normal map that darkens when the light shines on it and lightens when it doesn't. Yellow tangent-space normal maps like these are used to fake subsurface scattering, because they make it appear that light shines through the model; you can test this by inverting a normal map.

The yellow is a result of the image scaling; it won't matter if you only scale once, but if you scaled from 2048 to 1024, then to 512, and then converted to jpg, your model will look like it has holes punched into it.

Converting to jpg also blurs colors into yellows, although less so than scaling.

 

You generally don't want to do the actual resizing/averaging with 8 bits of precision. Ideally you'll have the full-res normal map available at 16-bit precision, and you'll do the filtering with 32-bit floating point operations. Otherwise errors can accumulate and give you poor results.

 

As mentioned here by MJP, using a higher bit depth gives more precision, reducing the side effects of the resampling.

 

Summary: Scaling's side effects are small if it is only done for the end product, baking from a large texture to a small one is better for long-term work, converting/compressing is better than scaling in most cases, and in the end the visual effect is small.

 

I hope this made it clearer. It's mostly the artist editing the work who has to concern themselves with the long-term effects.

Your artist will often provide you with a master texture, so that you can upload it back to them when they make edits.

 

 

I believe that will only work if I have the original high-poly mesh? In my case my only source for the normals is the normal map itself.

 

Yep, in that case baking is not an option.

 

https://drive.google.com/open?id=0B3hHgiNtHATdLWg4WHhJaXdsVkk

This zip has the .blend file used for the test; on the first layer you will see the barrel. In the right corner there should be a large "Bake" button.

Pressing the button will bake from the 2048 texture in the material to a 512 texture in the UV layout.

 

I often forget that not everyone makes 3D models for a living; this is a large part of making textures, so I assumed you would see how it works after watching the tutorial.

 

Also in the zip are all the files used here, for those who want to test it themselves.

 

Some tips: If you use Photoshop, experiment with the various scaling filters. If you do it yourself, the simplest high-quality method to downscale to half resolution is: blur the image before you downscale (horizontal pass: NewPixel = (pixel*2 + leftPixel + rightPixel) / 4, then a second pass vertically), then downscale as usual by averaging 4 blurred pixels to the final value (+ normalize in your case). Mostly the blur produces much better images, but it always depends. (I guess the quality is similar to the more complicated method described in the NVidia paper.)

 

A smart choice; however, most will mix colors, and those that don't are rarely worth using.

Note that blur, morph, patching and smearing all mix color, meaning they mix data from two bordering pixels.

You don't want to mix colors in normal maps, except for the final one used in the game.

Edited by Scouting Ninja

https://drive.google.com/open?id=0B3hHgiNtHATdLWg4WHhJaXdsVkk This zip has the .blend file used for the test; on the first layer you will see the barrel. In the right corner there should be a large "Bake" button. Pressing the button will bake from the 2048 texture in the material to a 512 texture in the UV layout.

 

Did not look at the file, but I assume what you mean is:

Even if you do not have the original high-res poly model, you can still use 3D software: load your game model and normal map and bake it to a lower resolution.

That's great - I did not think of that, sorry. The 3D software will also take care of the difficult things like texels at seams :)

 

Edit: I guess you could even do this automatically with Blender and some batch file.
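
Something along these lines might do it (an untested sketch, assuming Blender 2.7x with the bake target already configured in the .blend, as in the file above; the script and image names are hypothetical):

    # In the batch file / shell script, run Blender headless with a script:
    #   blender --background barrel.blend --python bake_to_512.py

    # bake_to_512.py
    import bpy

    # Trigger the same bake the big "Bake" button runs (Blender Internal).
    bpy.ops.object.bake_image()

    # Save the baked 512 target image next to the .blend file.
    img = bpy.data.images["baked_512"]   # hypothetical name of the bake target image
    img.filepath_raw = "//baked_512.png"
    img.file_format = 'PNG'
    img.save()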

Edited by JoeJ


It's only in the last image that lossy compression starts to show; at the distance it will be seen in game, combined with a color texture, this won't be noticed.

 

JPEG compression has visible artifacts in the low frequencies (such as gradients), and those need to be precise in normal maps, as opposed to a color texture.

 

Your profiling scenario has nicely demonstrated it, actually :).

Share this post


Link to post
Share on other sites
Sign in to follow this  

  • Advertisement
×

Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.

We are the game development community.

Whether you are an indie, hobbyist, AAA developer, or just trying to learn, GameDev.net is the place for you to learn, share, and connect with the games industry. Learn more About Us or sign up!

Sign me up!