cephalo

DX11 [SharpDX] Loading Multiple Images into a Texture Array


==OP==

Back in the DX9 days before texture arrays, loading a texture was as simple as Texture2D.FromFile(). Now for DX11, things have become vastly more complicated. I've been googling this issue and seeing various contradictory information indicating that certain convenient helper functions have gone legacy. I looked at the way the Rastertek multitexturing tutorial does it, and I am totally confused by it. It seems to load bitmap files directly into shader resource views, makes an array of shader resource views, and calls that the texture array... In any case, I can't find the same functions in the latest version of SharpDX. I've seen some examples of texture loading with WIC, but nothing for texture arrays.

 

My question is, what would be the proper way as of today, 5/28/2013, to load a series of PNG files into a single texture array with automatically generated mipmaps? Is there a simple way to do that, or are we talking about a thousand lines of code before we can pass this to a shader?

======

 

Here is my solution:

    // Needed namespaces for this snippet:
    // using System.Collections.Generic;
    // using SharpDX;                  // DataStream, DataBox
    // using SharpDX.Direct3D11;       // Texture2D, ShaderResourceView, BindFlags, Resource
    // using SharpDX.WIC;              // ImagingFactory, BitmapSource
    public static class TextureUtilities
    {
        // Assumes every input image has the same dimensions as the first one and
        // can be converted to 32-bit RGBA.
        public static Texture2D LoadTextureArray(string[] names)
        {
            ImagingFactory factory = new ImagingFactory();
            List<BitmapSource> bitmaps = new List<BitmapSource>();

            // Decode every file up front so the first image can define the array dimensions.
            foreach (string name in names)
            {
                BitmapSource bitmap = LoadBitmap(factory, name);
                bitmaps.Add(bitmap);
            }

            int width = bitmaps[0].Size.Width;
            int height = bitmaps[0].Size.Height;
            int mipLevels = CountMips(width, height);

            // The RenderTarget bind flag and the GenerateMipMaps option flag are both
            // required so that GenerateMips can build the lower mip levels on the GPU.
            Texture2D texArray = new Texture2D(Game.GraphicsDevice, new Texture2DDescription()
            {
                Width = width,
                Height = height,
                ArraySize = names.Length,
                BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget,
                Usage = SharpDX.Direct3D11.ResourceUsage.Default,
                CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.None,
                Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
                MipLevels = mipLevels,
                OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.GenerateMipMaps,
                SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
            });

            // Copy each decoded image into mip 0 of its array slice.
            int stride = width * 4;
            for (int i = 0; i < bitmaps.Count; ++i)
            {
                var buffer = new DataStream(height * stride, true, true);
                bitmaps[i].CopyPixels(stride, buffer);
                // The slice pitch (third argument) is ignored for 2D subresources.
                DataBox box = new DataBox(buffer.DataPointer, stride, 1);

                Game.GraphicsDevice.ImmediateContext.UpdateSubresource(box, texArray,
                    Resource.CalculateSubResourceIndex(0, i, mipLevels));
                buffer.Dispose();
            }

            // Generate the remaining mip levels from the top-level data.
            ShaderResourceView view = new ShaderResourceView(Game.GraphicsDevice, texArray);
            Game.GraphicsDevice.ImmediateContext.GenerateMips(view);
            view.Dispose();

            // Release the WIC objects now that the pixel data lives on the GPU.
            foreach (BitmapSource bitmap in bitmaps)
                bitmap.Dispose();
            factory.Dispose();

            return texArray;
        }

        public static BitmapSource LoadBitmap(ImagingFactory factory, string filename)
        {
            var bitmapDecoder = new SharpDX.WIC.BitmapDecoder(
                factory,
                filename,
                SharpDX.WIC.DecodeOptions.CacheOnDemand
                );

            // Convert whatever the file contains to 32-bit premultiplied RGBA,
            // matching the R8G8B8A8_UNorm texture format above.
            var result = new SharpDX.WIC.FormatConverter(factory);

            result.Initialize(
                bitmapDecoder.GetFrame(0),
                SharpDX.WIC.PixelFormat.Format32bppPRGBA,
                SharpDX.WIC.BitmapDitherType.None,
                null,
                0.0,
                SharpDX.WIC.BitmapPaletteType.Custom);

            return result;
        }

        private static int CountMips(int width, int height)
        {
            // Lifted from the SharpDX Toolkit: number of levels in a full mip chain.
            int mipLevels = 1;

            while (height > 1 || width > 1)
            {
                ++mipLevels;

                if (height > 1)
                    height >>= 1;

                if (width > 1)
                    width >>= 1;
            }

            return mipLevels;
        }

    }
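
For reference, a hypothetical usage sketch (not part of the original post; the file names are made up, Game.GraphicsDevice is the device used above):

    // Build the array once, wrap it in a shader resource view, and bind it so a
    // pixel shader can sample it as a Texture2DArray.
    Texture2D terrainTextures = TextureUtilities.LoadTextureArray(
        new[] { "grass.png", "rock.png", "snow.png" });   // example file names

    var terrainView = new ShaderResourceView(Game.GraphicsDevice, terrainTextures);
    Game.GraphicsDevice.ImmediateContext.PixelShader.SetShaderResource(0, terrainView);
    // In HLSL the matching declaration is a Texture2DArray, not Texture2D[].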

======

The simplest way I can think of is this:

ID3D11ShaderResourceView* textures[10];

for (int i = 0; i < 10; i++)
    D3DX11CreateShaderResourceViewFromFile(device, fileNames[i], NULL, NULL, &textures[i], NULL);

// deviceContext is the immediate context you already render with
deviceContext->PSSetShaderResources(0, 10, textures);

Not sure if this is what you are asking.

======
Tutorial #17? It is misleading. It's not using a real texture array, but - erm - an array of textures. If you look at the shader it has Texture2D shaderTextures[2]. This is not even an array IMO, because it fills two slots. A real one would be Texture2DArray in HLSL.

Problem with an image format like PNG is that it does not support multiple images per file. At least as far as I know. Anyway: the only legacy function which supports that is in conjunction with DDS. Haven't used arrays seriously so far, but it works fine for TextureCube (cube maps), which is a texture array (just with a special flag: ResourceOptionFlags.TextureCube, and a special resource type in HLSL: TextureCube).
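
For illustration only (not from the original post; `device` is a placeholder), that is roughly what the creation side looks like in SharpDX: a cube map is just a six-slice array with the extra option flag.

// A cube map described as a 6-slice texture array with the TextureCube flag.
var cubeDesc = new Texture2DDescription
{
    Width = 256,
    Height = 256,
    MipLevels = 1,
    ArraySize = 6,                                   // one slice per cube face
    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.TextureCube,   // the special flag mentioned above
};
var cubeMap = new Texture2D(device, cubeDesc);       // sampled as TextureCube in HLSL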

Still, I don't see the need to write that much code, even without the legacy functions. With either System.Drawing.Imaging or WIC you should be able to load and access the raw image data, which you then fill in slicewise or put together before creation/initialization. Or use the legacy functions and fill the slices with Context.CopySubresourceRegion. Just throwing around some ideas.
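
Purely as a sketch of that last idea, in SharpDX style (check the parameter order of CopySubresourceRegion against your SharpDX version; `device`, `sliceTextures`, `texArray` and `mipLevels` are placeholder names, not from this thread):

// Load each image into its own single-slice texture first (however you like),
// then copy mip 0 of each one into the matching slice of the array.
DeviceContext context = device.ImmediateContext;
for (int slice = 0; slice < sliceTextures.Length; slice++)
{
    int dstIndex = Resource.CalculateSubResourceIndex(0, slice, mipLevels);
    // A null source region copies the entire source subresource.
    context.CopySubresourceRegion(sliceTextures[slice], 0, null, texArray, dstIndex, 0, 0, 0);
}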

PS: How's Taffy Mountains?

======

Aha! I knew this was not going to be easy. Tell me if my plan is sane. I create a bunch of images to be used together with GIMP, then I make all of the mipmaps manually by resizing. Then I write some code that collates all these hundreds of image files as raw data. For each image, I put all the mip levels one after the other into a massive single buffer that holds everything end to end, like so:

 

image0 mip0
   mip1
   mip2
...
   mip9
image1 mip0
   mip1
   mip2
...
   mip9
image2 etc..

Blegh! So I really have to roll my own system for putting texture arrays together? See you all next month!
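
For what it's worth, that ordering (all mips of slice 0, then all mips of slice 1, and so on) is exactly the subresource order D3D11 expects if you hand it all the data at creation time. A rough SharpDX-flavored sketch, where `arraySize`, `mipLevels`, `mipPointers`, `mipPitches`, `description` and `device` are placeholder names:

// With Usage.Default and initial data, every subresource needs an entry,
// laid out slice-major exactly as in the diagram above.
var initData = new DataRectangle[arraySize * mipLevels];
for (int slice = 0; slice < arraySize; slice++)
{
    for (int mip = 0; mip < mipLevels; mip++)
    {
        // Same value Resource.CalculateSubResourceIndex(mip, slice, mipLevels) returns.
        int index = slice * mipLevels + mip;
        initData[index] = new DataRectangle(mipPointers[slice][mip], mipPitches[slice][mip]);
    }
}
var texArray = new Texture2D(device, description, initData);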

======
That would be "only" saturday tongue.png

Can't speak for SharpDX (still using SlimDX) so yeah, I think that's it. And please apologize I can't help you with the structure more, haven't touched the "feed the whole blob" yet. Getting the offsets and pitches right smells like a "good" debugging session.

But could you live with the autogenerated mipmaps ? Grant your texture the render target view flag and generate them with Context.GenerateMips, filling the top level first.

Alternatively: Update the slices and mips separately, and then save the whole stuff with the legacy function or as a raw blob. Edit: There's a helper function you don't want to miss: Resource.CalculateSubResourceIndex. Edited by unbird
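
In SharpDX-flavored pseudocode (placeholder names; essentially what the edited first post ends up doing), the autogenerated-mip route boils down to:

// Update only mip 0 of each slice, then let the GPU build the rest.
for (int slice = 0; slice < arraySize; slice++)
{
    int subresource = Resource.CalculateSubResourceIndex(0, slice, mipLevels);   // mip 0 of this slice
    context.UpdateSubresource(new DataBox(pixelPointers[slice], rowPitch, 0), texArray, subresource);
}
using (var srv = new ShaderResourceView(device, texArray))
    context.GenerateMips(srv);   // needs BindFlags.RenderTarget and ResourceOptionFlags.GenerateMipMaps on texArray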

======

You don't suppose there is some kind of weird header between images, do you? I don't know if you ever worked with the old Windows bitmaps, DIBs, HBITMAPs or other such long-repressed nightmares, but it wasn't as simple as rgba, rgba, rgba... etc.

======
Careful. You don't feed D3D11 image file data, but raw data, even if the legacy functions make you think you do. So for texture creation or (sub)updating there are no headers involved. But of course the format has to match, so a swizzle or type conversion might be needed.

That's why one of my suggestions was using the .NET library. As soon as you have the image loaded you can access the raw pixel data with Bitmap.LockBits.

Admittedly, I actually never wrote an image loader myself (maybe some easy TGA, IIRC). But I surely never will - hopefully. Using .NET or D3DX (or a custom image library) to set up some preprocessing seems the easiest way.
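
A minimal LockBits sketch (System.Drawing, not from the original post; types are fully qualified to avoid clashing with the SharpDX.WIC names used elsewhere in this thread, and the file name is just an example):

using (var bmp = new System.Drawing.Bitmap("grass.png"))
{
    System.Drawing.Imaging.BitmapData data = bmp.LockBits(
        new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height),
        System.Drawing.Imaging.ImageLockMode.ReadOnly,
        System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    try
    {
        // Scan0 points at the first row and Stride is the row pitch in bytes:
        // exactly the pointer/pitch pair UpdateSubresource wants, once the
        // BGRA byte order is swizzled to match R8G8B8A8_UNorm.
        IntPtr pixels = data.Scan0;
        int rowPitch = data.Stride;
        // ... copy/swizzle into the texture slice here ...
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}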

======

PS: How's Taffy Mountains?

 

Ah yes, I'm getting ready to test out the river system. I have a series of textures for normal/displacement that I made programmatically for each kind of river junction such as:

[attachment=16009:RiverJuncs.jpg]

 

There are three river sizes that can run along the hex vertices according to certain overly complicated rules. This time around I am also trying to support rivers going into and out of lakes, which will be interesting. It won't just have one ocean level that counts as water; it will have higher-altitude bodies of water, particle-effect waterfalls, etc.

 

That's all theory for now! First I have to load my textures to see if any of this is feasible!

 

======

Careful. You don't feed D3D11 image file data, but raw data, even if the legacy functions make you think you do. So for texture creation or (sub)updating there are no headers involved. But of course the format has to match, so a swizzle or type conversion might be needed.

That's why one of my suggestions was using the .NET library. As soon as you have the image loaded you can access the raw pixel data with Bitmap.LockBits.

Admittedly, I actually never wrote an image loader myself (maybe some easy TGA, IIRC). But I surely never will - hopefully. Using .NET or D3DX (or a custom image library) to set up some preprocessing seems the easiest way.

 

I can probably handle this; I just didn't want to overlook some easy solution before working really, really hard rolling my own. The GIMP DDS plugin is planned to someday support exporting texture arrays, but alas, I was born two years too early and it isn't here when I need it.

======
Nice. And ambitious. (Edit: I mean the rivers, of course).

OK, below is a helper function I use which works for R8G8B8A8_UNorm and B8G8R8A8_UNorm. As said, SlimDX, so it might need some changes. Use at your own risk, but it should give you a start.
// helper swizzle function to make DXGI.Format.R8G8B8A8_UNorm compatible with System.Drawing.Imaging.PixelFormat.Format32bppArgb
// (the no-op shifts are silly, of course, but they keep the channel mapping explicit)
public static uint ARGBToABGR(uint argb)
{
    return
        ((argb & 0xff000000) >> 0) |
        ((argb & 0x00ff0000) >> 16) |
        ((argb & 0x0000ff00) << 0) |
        ((argb & 0x000000ff) << 16);
}

// ********* EDIT: DELETED UNUSABLE FUNCTION SEE POST FURTHER BELOW ********
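
As a usage illustration only (not from the original post): combined with the LockBits sketch a few posts up, where `data` is the BitmapData and ARGBToABGR is the helper above, the swizzle could fill a buffer for UpdateSubresource like this:

// Walk the locked GDI+ pixels (BGRA in memory, i.e. ARGB as a uint) and write
// RGBA-ready values into a DataStream.
int width = data.Width, height = data.Height;
var buffer = new DataStream(width * height * 4, true, true);
for (int y = 0; y < height; y++)
{
    IntPtr row = data.Scan0 + y * data.Stride;
    for (int x = 0; x < width; x++)
    {
        uint argb = (uint)System.Runtime.InteropServices.Marshal.ReadInt32(row, x * 4);
        buffer.Write(ARGBToABGR(argb));   // now matches R8G8B8A8_UNorm
    }
}
// buffer.DataPointer with a row pitch of width * 4 can now feed UpdateSubresource.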
