D3D11 Framework Update

posted in Lyost's Journal
Published March 10, 2013
While there is still plenty of work to do on my D3D11 framework, I thought I'd post an update on its current status. Here is what's currently in it:

  • keyboard input
  • mouse input
  • vertex shaders
  • pixel shaders
  • viewports
  • blend and depth stencil states
  • texturing (just loading from a file for now, will be expanding functionality later)
  • resizing/fullscreen toggling

A short list of things I will be adding (I have a longer list of tasks for my own reference):

  • rest of the shader types
  • model loading
  • high dpi mouse support
  • joystick/xbox controller support
  • deferred contexts
  • sprite support (DirectXTK looks like it might make this part easy, especially the text support)
  • networking lib

It would be further along, but two main factors slowed down development on it. The first and main factor was that I wasn't able to work on it for most of January due to work requiring a significant amount of extra hours. The second, more of a speed bump, was that there are several functions MSDN notes cannot be used in applications submitted to the Windows Store (the entire D3D11 reflection API, for example). While I am a long way off from doing anything with the Windows Store, if ever, I figured it might make this framework more useful and reduce maintenance down the line if I followed the recommendations and requirements for it. Part of that is compiling shaders at build time instead of at runtime, so I upgraded to VS2012 to get integrated support for that instead of trying to roll my own content pipeline (which is not off the table for other content types, just no immediate plans for that feature).

Converting the project from VS2010 to VS2012 needed four notable changes, the first three of which are straightforward:
1. Removing references to the June 2010 DirectX SDK from include and linker paths, since the header and library files are now part of the VS2012 installation and available in default directories.
2. Switching from xnamath to DirectXMath.h, which in addition to changing the name of the include file also moves its types into the DirectX namespace. Since the rest of the DirectX functions and types (e.g. ID3D11DeviceContext) are not in that namespace, it seems a bit inconsistent to have some functions/types inside the namespace and others outside of it.
3. The next easy change was doing build-time shader compilation and changing the runtime loading mechanism to load the .cso file. This was a matter of splitting the vertex and pixel shaders into separate .hlsl files and setting their properties appropriately (mainly "Entrypoint Name" and "Shader Type"; I also like "Treat Warnings As Errors" on). The ContentManager class also needed its shader loading mechanism updated. Instead of calling D3DX11CompileFromFile in the CompileShader function, CompileShader was changed to LoadFile, which does a binary load of a file into memory. After that, the code was the same: calling CreateVertexShader or CreatePixelShader on the ID3D11Device instance (see the sketch after this list). There was a little wrinkle in these edits regarding the current working directory when debugging vs. the location of the .cso files. The .cso files were placed in $(OutDir) with the results of compiling the other projects in the solution (i.e. the .lib and .exe files, not the .obj for each code file), whereas the working directory when debugging was the code directory for the test program. This meant that specifying just the filename without a path would fail to find the file at runtime. So either the output path for compiling the shaders needed to change or the working directory when debugging did. I chose to change the working directory, since running in the same directory as the .exe file is a better match for how an application would be used outside of development. This also meant that the texture I was using needed to be copied to $(OutDir) as well, which was easy enough to add as a "Post-Build Event" (though I initially put in the copy command as "cp" since I'm used to Linux, but it didn't take long to remember the Windows name for the command is "copy").
4. This was the not-so-straightforward change: texture loading. Previously I was using D3DX11CreateShaderResourceViewFromFile, which is not available in VS2012. To get texture loading working again, I had to do a bit of research. I didn't want to get bogged down in loading various image file formats and reinventing the wheel (this entire project can probably be called reinventing the wheel, but the point of it is more about applying D3D11 and expanding my understanding of it). Luckily, right on the MSDN page for D3DX11CreateShaderResourceViewFromFile, there are links to recommended replacements. The recommended runtime replacement is DirectXTK, which compiled right out of the box for me, no playing with settings or paths to get it to compile. For using it, there was one problem I ran into. Initially I was using seafloor.dds from Tutorial 7 of the June 2010 DirectX Sample Browser. The DirectXTK Simple Win32 Sample comes with a texture file of the same name that looks the same. However, there is at least a slight difference in the file contents: the one from the June 2010 sample failed to load with an ERROR_NOT_SUPPORTED issue, while the one from the DirectXTK sample works. After dealing with that, I decided to get rid of third-party content, and am now using a debug texture I had lying around, a .png file, which works with DirectXTK just fine (texture loading is also shown in the sketch below).
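
To make changes 3 and 4 concrete, here is a minimal sketch of the runtime side, assuming a hypothetical LoadFile helper and illustrative file names (test_vs.cso, debug_texture.png); the framework's actual ContentManager code differs, but the D3D11 and DirectXTK calls are the ones described above:

```cpp
#include <d3d11.h>
#include <fstream>
#include <vector>
#include "WICTextureLoader.h" // from DirectXTK

// Binary load of a build-time compiled shader object (.cso) into memory.
static bool LoadFile(const wchar_t* path, std::vector<char>& bytes)
{
  std::ifstream file(path, std::ios::binary | std::ios::ate);
  if (!file)
  {
    return false;
  }
  bytes.resize((size_t)file.tellg());
  file.seekg(0, std::ios::beg);
  file.read(bytes.data(), bytes.size());
  return true;
}

HRESULT LoadContent(ID3D11Device* device,
                    ID3D11VertexShader** vertex_shader,
                    ID3D11ShaderResourceView** texture_view)
{
  // Change 3: no D3DX11CompileFromFile at runtime, just create the shader
  // from the bytecode the build produced (path relative to the working dir).
  std::vector<char> bytecode;
  if (!LoadFile(L"test_vs.cso", bytecode))
  {
    return E_FAIL;
  }
  HRESULT hr = device->CreateVertexShader(bytecode.data(), bytecode.size(), NULL, vertex_shader);
  if (FAILED(hr))
  {
    return hr;
  }

  // Change 4: DirectXTK replaces D3DX11CreateShaderResourceViewFromFile.
  // CreateWICTextureFromFile handles .png and similar formats,
  // CreateDDSTextureFromFile handles .dds.
  return DirectX::CreateWICTextureFromFile(device, L"debug_texture.png", NULL, texture_view);
}
```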

Backing up for a moment to before the VS2012 upgrade: in my previous journal entry, "D3D11 Port of Particle System Rendering", I mentioned that I wanted to look into other ways of creating the input layout description for a vertex shader. The first thing I looked into was the shader reflection API, so that the input layout would be determined automatically from the vertex shader code. This was obviously before I knew the API would not be available to Windows Store applications or decided to follow those requirements. Aside from that, I ran into a different issue that prevented me from using it here. The vertex shader code knows what its inputs are, but it doesn't know which are per-vertex and which are per-instance. From looking at D3D11_INPUT_ELEMENT_DESC, knowing per-vertex or per-instance is important for setting the InputSlotClass and InstanceDataStepRate fields correctly. Since reflection wasn't an option here, I created a class to manage the input layout, not surprisingly named InputLayout. It avoids the simple mistake I made in the previous journal entry of incorrectly computing the aligned byte offsets, as well as avoiding spelling mistakes in semantic names and attempts to set more input layout entries than were allocated. In its current form, it forces non-instance vertex buffers into slot 0 and instance data into slot 1, precluding any additional vertex buffers, which I might revisit later once I hit a scenario that requires more.
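
To illustrate the bookkeeping involved, here is a hypothetical sketch (not the framework's actual InputLayout API; the class name, methods, and two-slot convention are assumptions based on the description above) of filling out D3D11_INPUT_ELEMENT_DESC entries with computed aligned byte offsets and the per-vertex/per-instance distinction:

```cpp
#include <d3d11.h>
#include <vector>

// Hypothetical helper mirroring the idea described above: per-vertex data in
// slot 0, per-instance data in slot 1, and aligned byte offsets accumulated
// per slot so they can't be miscomputed by hand.
class InputLayoutBuilder
{
public:
  InputLayoutBuilder() { m_offsets[0] = 0; m_offsets[1] = 0; }

  void AddPerVertex(const char* semantic, UINT semantic_index, DXGI_FORMAT format, UINT byte_size)
  {
    Add(semantic, semantic_index, format, byte_size, 0, D3D11_INPUT_PER_VERTEX_DATA, 0);
  }

  void AddPerInstance(const char* semantic, UINT semantic_index, DXGI_FORMAT format, UINT byte_size)
  {
    // step rate of 1 = advance to the next element after each instance
    Add(semantic, semantic_index, format, byte_size, 1, D3D11_INPUT_PER_INSTANCE_DATA, 1);
  }

  HRESULT CreateLayout(ID3D11Device* device, const void* vs_bytecode, SIZE_T bytecode_len,
                       ID3D11InputLayout** layout) const
  {
    return device->CreateInputLayout(&m_elements[0], (UINT)m_elements.size(),
                                     vs_bytecode, bytecode_len, layout);
  }

private:
  void Add(const char* semantic, UINT semantic_index, DXGI_FORMAT format, UINT byte_size,
           UINT slot, D3D11_INPUT_CLASSIFICATION slot_class, UINT step_rate)
  {
    D3D11_INPUT_ELEMENT_DESC desc;
    desc.SemanticName         = semantic;
    desc.SemanticIndex        = semantic_index;
    desc.Format               = format;
    desc.InputSlot            = slot;
    desc.AlignedByteOffset    = m_offsets[slot]; // computed, not hand-entered
    desc.InputSlotClass       = slot_class;
    desc.InstanceDataStepRate = step_rate;
    m_elements.push_back(desc);
    m_offsets[slot] += byte_size;
  }

  std::vector<D3D11_INPUT_ELEMENT_DESC> m_elements;
  UINT m_offsets[2];
};
```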

For configuring the D3D11 rendering pipeline, my approach has been to get everything into a known state. Taking the example of setting a constant buffer on the vertex shader stage: if a particular constant buffer is not set, then whatever the previous value was will still be present the next time the shader is invoked. To avoid these stale states, when a VertexShader instance's MakeActive function is called, it sets all of the constant buffers to the last values the VertexShader instance received for them (the actual function for setting a constant buffer is in ShaderBase). Taking this known-state design a step further is where the RenderPipeline class comes in (once I get the parts done and integrated into it, anyway). The plan is that when an instance's MakeActive function is called, the vertex shader, pixel shader, their constant buffers, etc. are all active on the provided ID3D11DeviceContext. An obvious improvement over setting all of the pipeline info each time would be to only set the parts that don't match the last values for the ID3D11DeviceContext instance. However, I've been trying to get things working before I start looking into performance improvements.
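
As a rough illustration of that known-state idea, here is a hypothetical sketch (the framework's actual VertexShader/ShaderBase classes differ in the details) where MakeActive rebinds the shader and every cached constant buffer so the stage never depends on leftover state:

```cpp
#include <d3d11.h>
#include <vector>

// Hypothetical illustration of the "known state" approach described above.
class VertexShaderState
{
public:
  VertexShaderState(ID3D11VertexShader* shader, UINT num_constant_buffers)
    : m_shader(shader), m_constant_buffers(num_constant_buffers, (ID3D11Buffer*)NULL)
  {
  }

  // Remember the last value for a slot; it is (re)bound on the next MakeActive.
  void SetConstantBuffer(UINT slot, ID3D11Buffer* buffer)
  {
    m_constant_buffers[slot] = buffer;
  }

  // Put the vertex shader stage into a fully known state: the shader itself
  // plus every constant buffer slot this instance knows about.
  void MakeActive(ID3D11DeviceContext* context)
  {
    context->VSSetShader(m_shader, NULL, 0);
    if (!m_constant_buffers.empty())
    {
      context->VSSetConstantBuffers(0, (UINT)m_constant_buffers.size(), &m_constant_buffers[0]);
    }
  }

private:
  ID3D11VertexShader* m_shader;
  std::vector<ID3D11Buffer*> m_constant_buffers;
};
```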

Below is a screenshot of the test program, which uses instanced rendering for 2 cubes and 4 viewports to display them from different angles.
[sharedmedia=gallery:images:3470]


d3d11_framework.zip is a zip file with the source for the framework (MIT license, as usual for me). It does not include DirectXTK, which can be found here. To add it in, put the DirectXTK include files in the framework's external_tools/DirectXTK/public_inc/ directory, and once built, put DirectXTK.lib in external_tools/DirectXTK/lib/Debug/ or external_tools/DirectXTK/lib/Release/, depending on which configuration you built it for.

Comments

eppo

To clarify: You'll need to compile the DirectXTK lib before you can compile the framework (using its own VS solution). After that, move the generated files from 'DirectXTK/bin/...' to 'DirectXTK/lib'.

You may want to look into texture(-buffer) instancing (as opposed to stream-based instancing), as it may integrate better with your pipeline.

March 20, 2013 07:49 AM