iEat_Babies

Mip maps..... no understanding


Hey, could someone please explain all about mip maps?
I really don't get it and I really should. Some help would really be appreciated!
All I really know is this:
[code]Texture xTexture0;
sampler TextureSampler0 = sampler_state {
texture = <xTexture0>;
magfilter = anisotropic;
minfilter = anisotropic;
mipfilter = linear;
AddressU = mirror;
AddressV = mirror;
};[/code]


Compared to this:
[code]Texture xTexture0;
sampler TextureSampler0 = sampler_state {
texture = <xTexture0>;
magfilter = linear;
minfilter = linear;
mipfilter = linear;
AddressU = mirror;
AddressV = mirror;
};[/code]


The first one looks better!

Once again, help would be great!

Thanks!

[quote name='iEat_Babies' timestamp='1310274523' post='4833242']
Hey, could someone please explain all about mip maps?[/quote]

What you posted is not so much about mipmaps as it is about anisotropic filtering of the mipmaps.

Anisotropic filtering samples the texture more when the texture's surface is at a sharper angle relative to the camera, which reduces blur.
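In the DX9-style effect syntax from the post above, the number of anisotropic samples can also be capped per sampler. A hedged sketch (the MaxAnisotropy value of 4 is just illustrative, not something from the original code):

[code]Texture xTexture0;
sampler TextureSampler0 = sampler_state {
texture = <xTexture0>;
magfilter = anisotropic;
minfilter = anisotropic;
mipfilter = linear;
MaxAnisotropy = 4; // higher = sharper at grazing angles, but slower
AddressU = mirror;
AddressV = mirror;
};[/code]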

This article explains mip mapping and anisotropic filtering in more detail:

[url="http://www.extremetech.com/computing/78546-antialiasing-and-anisotropic-filtering-explained"]http://www.extremete...ering-explained[/url]

The mipmaps themselves are (typically) a series of down-resed copies of the original image; each mip is half the size of the one above it. A full mip chain goes all the way down to 1x1 (regardless of whether the image is square or not). A complete mip chain is not technically required, but not providing one can cause some pretty bad performance problems in some cases.
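As a rough illustration (plain C#, independent of any graphics API), the chain for a 256x64 image would be:

[code]// Sketch: walk the full mip chain for a 256x64 image.
// Each level halves both dimensions (clamped to 1) until 1x1 is reached,
// even though the image is not square.
int w = 256, h = 64;
for (int level = 0; ; level++)
{
Console.WriteLine("mip {0}: {1}x{2}", level, w, h);
if (w == 1 && h == 1) break;
w = Math.Max(1, w / 2);
h = Math.Max(1, h / 2);
}
// mip 0: 256x64, mip 1: 128x32, ... mip 6: 4x1, mip 7: 2x1, mip 8: 1x1
// (nine levels in total)[/code]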

The lower the resolution of the mipmap, the better it maps to the cache on the GPU, which speeds things up (quite a lot actually). The hardware normally automatically picks which mipmap level to display quite well, except when working in screen space style effects. The filtering modes work in three 'dimensions':

mag filter - filter applied when the image is up-resed (typically when you are already rendering the largest mip level and there isn't another one to switch to)
min filter - filter applied when the image is down-resed
mip filter - filter applied between mip levels (on, off, linear)


When the mip filter is set to linear, the hardware picks a blend of two mipmap levels to display, so the effect looks more seamless. The UV you feed into the fetch causes the hardware to fetch the color from two mip levels and automatically crossfade them together. If you set the mip filter to nearest (point), it will only fetch one mip, which typically produces visible seams in the rendered world wherever the texture resolution jumps (since the hardware selects the levels automatically in most cases). This is faster, however, since it only has to do half the work.
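In DX9-style effect syntax the "nearest" mode is spelled point. A sketch of a sampler that snaps to a single mip level (the sampler name is just a placeholder):

[code]sampler NearestMipSampler = sampler_state {
texture = <xTexture0>;
magfilter = linear;
minfilter = linear;
mipfilter = point; // fetch from one mip level only -- faster, but seams where the level changes
};[/code]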

When the mag filter is set to linear, the hardware fetches a 2x2 block of pixels from a single mip level and crossfades them together with a bilinear filter. If the filter is set to anisotropic, it uses a proprietary multi-sample kernel to sample multiple sets of pixels from the image in various patterns. The number of samples corresponds to the anisotropy setting (from 2 to 16), at a substantial cost to performance in most cases. However, it helps maintain image quality when the polygons are nearly parallel to the camera, which can be pretty important for text on signs, stripes on roads, and other objects that tend to mip to transparent values too fast (chain-link fences).

You can set the hardware in quite a few configurations, as these settings are more or less independent of each other.

Thanks for all the info! Those articles were good and thanks for that great explanation Zoner!
But now I have 2 questions:
What are the different values I can specify for "mag filter" "min filter" and "mip filter"?

and

How can I implement mipmapping? I am using XNA 4.0. Do I have to do the calculations myself in HLSL? And how do I provide the different mipmap layers? Are each a separate texture that I would have to sample in the pixel shader?

[quote name='iEat_Babies' timestamp='1310278307' post='4833259']
Thanks for all the info! Those articles were good and thanks for that great explanation Zoner!
But now I have 2 questions:
What are the different values I can specify for "mag filter" "min filter" and "mip filter"?

and

How can I implement mipmapping? I am using XNA 4.0. Do I have to do the calculations myself in HLSL? And how do I provide the different mipmap layers? Are each a separate texture that I would have to sample in the pixel shader?
[/quote]

Mipmaps are actually subresources of the texture resource (mip level 0 being your main image, subsequent levels being the rest of the chain), so not only are they a part of the texture, but you provide them when you create it. E.g. if you manually create a Texture2D, you'd call SetData<T>() for each mip level. In most cases, however, if you have an image source file and build it with the XNA content pipeline, the mip chain will be created for you automatically.
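For example, manually filling a small two-level chain might look something like this (just a sketch; the size and colors are made up):

[code]// Create a 2x2 texture with a full mip chain (mipMap: true).
Texture2D tex = new Texture2D(device, 2, 2, true, SurfaceFormat.Color);

// Level 0: the full-size 2x2 image (4 texels).
tex.SetData<Color>(0, null, new Color[] { Color.Red, Color.Green, Color.Blue, Color.White }, 0, 4);

// Level 1: the 1x1 mip (1 texel).
tex.SetData<Color>(1, null, new Color[] { Color.Gray }, 0, 1);[/code]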

As for the mag/min/mip filter values, take a look at the [url="http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.samplerstate_members.aspx"]SamplerState[/url]'s filter property and the [url="http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.texturefilter.aspx"]TextureFilter[/url] enum. In XNA 4.0 (and DX10+) you specify the mag/min/mip filter as a whole unit, so TextureFilter.Linear would set mag/min/mip to linear. Setting these [url="http://msdn.microsoft.com/en-us/library/bb509644%28v=vs.85%29.aspx"]directly in HLSL[/url] still uses the DX9 syntax though (scroll down to the Remarks section for an example).
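From C# you can then either pick one of the built-in states or build your own (a sketch, assuming your effect samples from slot 0; the MaxAnisotropy value is arbitrary):

[code]// Built-in state: anisotropic min/mag filtering with wrap addressing.
device.SamplerStates[0] = SamplerState.AnisotropicWrap;

// Or a custom state closer to the OP's DX9 sampler block:
SamplerState aniso = new SamplerState();
aniso.Filter = TextureFilter.Anisotropic;
aniso.AddressU = TextureAddressMode.Mirror;
aniso.AddressV = TextureAddressMode.Mirror;
aniso.MaxAnisotropy = 4;
device.SamplerStates[0] = aniso;[/code]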

[quote name='Starnick' timestamp='1310284821' post='4833267']Mipmaps are actually subresources of the texture resource (mip level 0 being your main image, subsequent levels being the rest of the chain), so not only are they a part of the texture, but you provide them when you create it. E.g. if you manually create a Texture2D, you'd call SetData<T>() for each mip level.[/quote]

I don't think the OP specified whether he was using XNA or not. If you are using XNA, mipmaps are automatically created if you draw to a RenderTarget2D that has the mipmap property set to true.
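i.e. something along these lines (a sketch; the size and formats are placeholders):

[code]// The fourth argument (mipMap: true) asks XNA to build the mip chain
// when the target is resolved for use as a texture.
RenderTarget2D rt = new RenderTarget2D(device, 512, 512, true,
SurfaceFormat.Color, DepthFormat.Depth24);[/code]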

Edit: Why the downvotes?

*points above to the quote*

"I am using XNA 4.0"

:wink:

But Olhovsky is right with render targets, when you resolve the target to be used as a shader resource, the framework will generate the mipmaps for you, so you don't have to worry about them. I was referring to mipmaps in context of manually creating a texture and filling it (e.g. making a run-time texture loader) or using the content pipeline, and not render-to-texture.

Really, in XNA you only have to worry about generating mipmaps and setting that data if you create a texture manually and are loading the data yourself. I'd imagine this is not a usual case, however.

In XNA 4.0,
If I just call
[code]myTexture = Content.Load<Texture2D>("someTexture");[/code]

and I have some indexed primitives and I draw them:
[code] // Set shader parameters
effect.CurrentTechnique = effect.Techniques["Textured"];

effect.Parameters["xWorld"].SetValue(worldMatrix);
effect.Parameters["xView"].SetValue(currentViewMatrix);
effect.Parameters["xProjection"].SetValue(currentProjMatrix);
effect.Parameters["xEnableLighting"].SetValue(true);
effect.Parameters["xAmbient"].SetValue(0.4f);
effect.Parameters["xLightDirection"].SetValue(new Vector3(-0.5f, -1, -0.5f));

effect.Parameters["xTexture"].SetValue(myTexture);

device.Indices = iBuffer;
device.SetVertexBuffer(vBuffer);

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
pass.Apply();

device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, vBuffer.VertexCount, 0, iBuffer.IndexCount / 3);
}[/code]


Then in my pixel shader I just do really basic texture sampling:
[code]TexPixelToFrame TexturedPS(TexVertexToPixel PSIn)
{
TexPixelToFrame Output = (TexPixelToFrame)0;

Output.Color = tex2D(TextureSampler, PSIn.TextureCoords);
Output.Color *= saturate(PSIn.LightingFactor + xAmbient);

return Output;
}[/code]


And "TextureSampler" looks like this:
[code]Texture xTexture;
sampler TextureSampler = sampler_state {
texture = <xTexture>;
magfilter = anisotropic;
minfilter = anisotropic;
mipfilter = linear;
AddressU = mirror;
AddressV = mirror;
};[/code]


..... does this already do all the mipmapping for me? And is it already doing anisotropic filtering?

Thanks for the clarification...

Why the downvote on my post above? Someone pointed out a rare case for creating mipmaps in XNA, so I pointed out the general case solution for creating mipmaps in XNA, which it turns out, the OP is using.

If there's any misinformation in my downvoted post, let me know. I want to learn too.

