Richards Software Ramblings



Drawing a Loading Screen with Direct2D in SlimDX

Posted 30 September 2013 · 746 views
C#, SlimDX, DirectX11, Direct2D

I know that I have been saying that I will cover random terrain generation in my next post for several posts now, and if all goes well, that post will be published today or tomorrow. First, though, we will talk about Direct2D, and using it to draw a loading screen, which we will display while the terrain is being generated to give some indication of the progress of our terrain generation algorithm.


Direct2D is a new 2D rendering API from Microsoft built on top of DirectX 10 (or DirectX 11 in Windows 8). It provides functionality for rendering bitmaps, vector graphics and text in screen-space using hardware acceleration. It is positioned as a successor to the GDI rendering interface, and, so far as I can tell, the old DirectDraw API from earlier versions of DirectX. According to the documentation, it should be possible to use Direct2D to render 2D elements to a Direct3D 11 render target, although I have had some difficulties in actually getting that use-case to work. It does appear to work excellently for pure 2D rendering, say for menu or loading screens, with a simpler syntax than using Direct3D with an orthographic projection.


We will be adding Direct2D support to our base application class, D3DApp, and using Direct2D to render a progress bar with some status text during our initialization stage, as we load and generate the Direct3D resources for our scene. Please note that the implementation presented here should only be used while there is no active Direct3D rendering occurring; because of my difficulties in getting Direct2D/Direct3D interoperation to work correctly, Direct2D uses a different backbuffer than Direct3D, so interleaving Direct2D drawing with Direct3D rendering will result in some very noticeable flickering when the backbuffers switch.
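
For reference, here is a minimal, hedged sketch of what the Direct2D side of such a progress screen might look like in SlimDX. The class name, the Draw method, and the rectangle coordinates are illustrative placeholders rather than the actual D3DApp implementation, and the status text (which would go through DirectWrite) is omitted:

```csharp
using System;
using System.Drawing;
using SlimDX;
using SlimDX.Direct2D;

// Hypothetical helper, not the actual D3DApp code
public class ProgressScreen : IDisposable {
    private readonly Factory _factory = new Factory();
    private readonly WindowRenderTarget _rt;

    public ProgressScreen(IntPtr windowHandle, int width, int height) {
        // Direct2D draws to its own window render target, separate from the D3D11 backbuffer
        _rt = new WindowRenderTarget(_factory, new WindowRenderTargetProperties {
            Handle = windowHandle,
            PixelSize = new Size(width, height)
        });
    }

    // percent is expected to be in [0, 1]
    public void Draw(float percent) {
        _rt.BeginDraw();
        _rt.Clear(new Color4(Color.Black));

        var outline = new RectangleF(100, 300, 600, 40);
        var fill = new RectangleF(102, 302, 596 * percent, 36);

        using (var brush = new SolidColorBrush(_rt, new Color4(Color.White))) {
            _rt.DrawRectangle(brush, outline);  // progress bar border
            _rt.FillRectangle(brush, fill);     // filled portion grows with percent
        }

        _rt.EndDraw();  // presents this Direct2D surface
    }

    public void Dispose() {
        _rt.Dispose();
        _factory.Dispose();
    }
}
```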



The inspiration for this example comes from Chapter 5 of Carl Granberg’s Programming an RTS Game with Direct3D, where a similar progress screen is implemented using DirectX 9 and the fixed-function pipeline. With the removal of the ID3DXFont interface from newer versions of DirectX, as well as the inability to clear just a portion of a DirectX 11 DeviceContext’s render target, a straight conversion of that code would require some fairly heavy lifting to implement in Direct3D 11, and so we will be using Direct2D instead. The full code for this example can be found on my GitHub repository, at https://github.com/ericrrichards/dx11.git, and is implemented in the TerrainDemo and RandomTerrainDemo projects.



Read More...



Terrain LOD for DirectX 10 Graphics Cards, using SlimDX and Direct3D 11

Posted 27 September 2013 · 707 views
C#, SlimDX, DirectX11, DirectX10 and 6 more...

Last time, we discussed terrain rendering, using the tessellation stages of the GPU to render the terrain mesh with distance-based LOD. That method required a DX11-compliant graphics card, since the Hull and Domain shader stages are new to Direct3D 11. According to the latest Steam Hardware survey, nearly 65% of gamers have a DX11 graphics card, which is certainly the majority of potential users, and only likely to increase in the future. Of the remaining 35% of gamers, 31% are still using DX10 graphics cards. While we can safely ignore the small percentage of the market that is still limping along on DX9 cards (I still have a GeForce Go 7400 in my oldest laptop, but that machine is seven years old and on its last legs), restricting ourselves to only DX11 cards cuts out a third of the potential users of our application. For that reason, I’m going to cover an alternative, CPU-based implementation of our previous LOD terrain rendering example. If you have the option, I would suggest that you only bother with the previous DX11 method, as tessellating the terrain mesh yourself on the CPU is more complex, prone to error, less performant, and produces a somewhat lower quality result; if you must support DX10 graphics cards, however, this method or one similar to it will do the job, while the hull/domain shader method will not.



We will be implementing this rendering method as an additional render path in our Terrain class, used when we detect that the user has a DX10-compliant graphics card. This allows us to reuse a large chunk of the previous code. For the rest, we will adapt portions of the HLSL shader code that we previously implemented into C#, drawing some inspiration from Chapter 4 of Carl Granberg’s Programming an RTS Game with Direct3D. The full code for this example can be found at my GitHub repository, https://github.com/ericrrichards/dx11.git, under the TerrainDemo project.
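
As a rough illustration (not the actual Terrain code), checking the device's reported feature level in SlimDX might look something like the following; the helper class and method names are placeholders of my own:

```csharp
using SlimDX.Direct3D11;

// Hypothetical helper for choosing between the DX11 and DX10 render paths
public static class RenderPath {
    // True if the hull/domain shader (GPU tessellation) path can be used
    public static bool SupportsGpuTessellation(Device device) {
        return device.FeatureLevel >= FeatureLevel.Level_11_0;
    }

    // True if we should fall back to the CPU-tessellated DX10 path
    public static bool RequiresCpuTessellation(Device device) {
        return device.FeatureLevel >= FeatureLevel.Level_10_0
            && device.FeatureLevel < FeatureLevel.Level_11_0;
    }
}
```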




Read More...



Dynamic Terrain Rendering with SlimDX and Direct3D 11

Posted 21 September 2013 · 968 views
C#, DirectX 11, SlimDX, Terrain and 4 more...

A common task for strategy and other games with an outdoor setting is the rendering of the terrain for the level. Probably the most convenient way to model a terrain is to create a triangular grid, and then perturb the y-coordinates of the vertices to match the desired elevations. This elevation data can be determined by using a mathematical function, as we have done in our previous examples, or by sampling an array or texture known as a heightmap. Using a heightmap to describe the terrain elevations gives us finer-grained control over the details of our terrain, and also allows us to define the terrain easily, either by using a procedural method to create random heightmaps, or by creating an image in a paint program.
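
To make the idea concrete, here is a small sketch (an assumption of my own, not the Terrain class itself) of building a centered grid of vertex positions whose y-coordinates are read from a row-major heightmap array:

```csharp
using SlimDX;

// Illustrative only: builds a centered grid and sets each vertex's height from the heightmap
public static class TerrainGrid {
    public static Vector3[] BuildVertices(float[] heightmap, int rows, int cols, float cellSpacing) {
        var vertices = new Vector3[rows * cols];
        var halfWidth = (cols - 1) * cellSpacing * 0.5f;
        var halfDepth = (rows - 1) * cellSpacing * 0.5f;

        for (var i = 0; i < rows; i++) {
            var z = halfDepth - i * cellSpacing;
            for (var j = 0; j < cols; j++) {
                var x = -halfWidth + j * cellSpacing;
                var y = heightmap[i * cols + j];   // elevation sampled from the heightmap
                vertices[i * cols + j] = new Vector3(x, y, z);
            }
        }
        return vertices;
    }
}
```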


Because a terrain can be very large, we will want to optimize the rendering of it as much as possible. One easy way to save rendering cycles is to only draw the vertices of the terrain that can be seen by the player, using frustum culling techniques similar to those we have already covered. Another way is to render the mesh at a variable level of detail, using the Hull and Domain shaders to render the terrain mesh with more polygons near the camera and fewer in the distance, in a manner similar to the one we used for our Displacement mapping effect. Combining the two techniques allows us to render a very large terrain, with a very high level of detail, at a high frame rate, although it does limit us to running on DirectX 11 compliant graphics cards.


We will also use a technique called texture splatting to render our terrain with multiple textures in a single rendering call. This technique involves using a separate texture, called a blend map, in addition to the diffuse textures that are applied to the mesh, in order to define which texture is applied to which portion of the mesh.
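
The blending itself happens in the pixel shader, but the weighting it performs is simple enough to sketch in C#. The following is purely illustrative, with Color4 standing in for sampled texels:

```csharp
using SlimDX;

// Illustrative only: how a blend-map sample weights four diffuse layers
public static class Splatting {
    public static Color4 Blend(Color4 baseLayer, Color4 layer1, Color4 layer2, Color4 layer3, Color4 blendSample) {
        // Each blend-map channel lerps the corresponding layer over the running result
        var c = baseLayer;
        c = Color4.Lerp(c, layer1, blendSample.Red);
        c = Color4.Lerp(c, layer2, blendSample.Green);
        c = Color4.Lerp(c, layer3, blendSample.Blue);
        return c;
    }
}
```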


The code for this example was adapted from Chapter 19 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, with some additional inspirations from Chapter 4 of Carl Granberg’s Programming an RTS Game with Direct3D. The full source for this example can be downloaded from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the TerrainDemo project.



Read More...



Diving into the Tessellation Stages of Direct3D 11

Posted 17 September 2013 · 818 views
C#, Direct3D 11, SlimDX and 3 more...

In our last example on normal mapping and displacement mapping, we made use of the new Direct3D 11 tessellation stages when implementing our displacement mapping effect. For the purposes of the example, we did not examine too closely the concepts involved in making use of these new features, namely the Hull and Domain shaders. These new shader types are sufficiently complicated that they deserve a separate treatment of their own, particularly since we will continue to make use of them for more complicated effects in the future.


The Hull and Domain shaders are covered in Chapter 13 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, which I had previously skipped over. Rather than use the example from that chapter, I am going to use the shader effect we developed for our last example instead, so that we can dive into the details of how the hull and domain shaders work in the context of a useful example that we have some background with.


The primary motivation for using the tessellation stages is to offload work from the CPU and main memory onto the GPU. We have already looked at a couple of the benefits of this technique in our previous post, but some of the advantages of using the tessellation stages are:
  • We can use a lower detail mesh, and specify additional detail using less memory-intensive techniques, like the displacement mapping technique presented earlier, to produce the final, high-quality mesh that is displayed.
  • We can adjust the level of detail of a mesh on-the-fly, depending on the distance of the mesh from the camera or other criteria that we define.
  • We can perform expensive calculations, like collisions and physics calculations, on the simplified mesh stored in main memory, and still render the highly-detailed generated mesh.
The Tessellation Stages
The tessellation stages sit in the graphics pipeline between the vertex shader and the geometry shader. When we render using the tessellation stages, the vertices created by the vertex shader are not really the vertices that will be rendered to the screen; instead, they are control points which define a triangular or quad patch, which will be further refined by the tessellation stages into vertices. For most of our usages, we will either be working with triangular patches, with 3 control points, or quad patches, with 4 control points, which correspond to the corner vertices of the triangle or quad. Direct3D 11 supports patches with up to 32 control points, which might be suitable for rendering meshes based on Bezier curves.
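
On the C# side, the main visible difference when the tessellation stages are in use is the input-assembler topology: the draw call submits control points rather than triangles. A hedged sketch (placeholder buffer and stride names, not the demo's exact code) might look like this:

```csharp
using SlimDX.Direct3D11;

// Illustrative only: submitting quad patches (4 control points each) for tessellation
public static class PatchDrawing {
    public static void DrawQuadPatches(DeviceContext dc, Buffer patchVertexBuffer, int vertexStride, int controlPointCount) {
        dc.InputAssembler.PrimitiveTopology = PrimitiveTopology.PatchListWith4ControlPoints;
        dc.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(patchVertexBuffer, vertexStride, 0));
        dc.Draw(controlPointCount, 0);  // these vertices arrive at the hull shader as control points
    }
}
```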

The tessellation stages can be broken down into three component stages:
  • Hull Shader Stage – The hull shader operates on each control point for a geometry patch, and can add, remove or modify its input control points before passing the patch on to the tessellator stage. The Hull shader also calculates the tessellation factors for a patch, which instruct the tessellator stage how to break the patch up into individual vertices. The hull shader is fully programmable, meaning that we need to define an HLSL function that will be evaluated to construct the patch control points and tessellation factors.
  • Tessellator Stage – The tessellator stage is a fixed-function stage (meaning that we do not have to write a shader for it), which samples the input patch and generates a set of vertices that divide the patch, according to the tessellation factors supplied by the hull shader and a partitioning scheme, which defines the algorithm used to subdivide the patch. Vertices created by the tessellator are normalized; quad patch vertices are specified by their (u,v) coordinates on the surface of the quad, while triangle patch vertices use barycentric coordinates to specify their location within the triangle patch.
  • Domain Shader Stage – The domain shader is a programmable stage (we need to write a shader function for it), which operates on the normalized vertices input from the tessellator stage, and maps them into their final positions within the patch. Typically, the domain shader will interpolate the final vertex value from the patch control points using the uv or barycentric coordinates output by the tessellator. The output vertices from the domain shader will then be passed along to the next stage in the pipeline, either the geometry shader or the pixel shader.

With these definitions out of the way, we can now dive into the displacement mapping effect from our previous example and examine just how the tessellation stages generate the displacement mapped geometry we see on the screen.


Read More...



Bump and Displacement Mapping with SlimDX and Direct3D 11

Posted 16 September 2013 · 640 views
C#, SlimDX, Direct3D 11 and 2 more...
Today, we are going to cover a couple of additional techniques that we can use to achieve more realistic lighting in our 3D scenes. Going back to our first discussion of lighting, recall that thus far, we have been using per-pixel Phong lighting. This style of lighting was an improvement upon the earlier method of Gouraud lighting, by interpolating the vertex normals over the resulting surface pixels and calculating the color of an object per-pixel, rather than per-vertex. Generally, the Phong model gives us good results, but it is limited, in that we can only specify the normals to be interpolated from at the vertices. For objects that should appear smooth, this is sufficient to give realistic-looking lighting; for surfaces that have more uneven textures applied to them, the illusion can break down, since the specular highlights computed from the interpolated normals will not match up with the apparent topology of the surface.

[Screenshot: per-pixel (Phong) lit stone columns]

In the screenshot above, you can see that the highlights on the nearest column are very smooth, and match the geometry of the cylinder. However, the column has a texture applied that makes it appear to be constructed out of stone blocks, jointed with mortar. In real life, such a material would have all kinds of nooks and crannies and deformities that would affect the way light hits the surface and create much more irregular highlights than in the image above. Ideally, we would want to model those surface details in our scene, for the greatest realism. This is the motivation for the techniques we are going to discuss today.

One technique to improve the lighting of textured objects is called bump or normal mapping. Instead of just using the interpolated pixel normal, we will combine it with a normal sampled from a special texture, called a normal map, which allows us to match the per-pixel normal to the perceived surface texture, and achieve more believable lighting.
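
The decode itself happens in the pixel shader, but the math is easy to show. The following C# is illustrative only; the HLSL version performs the same expansion and TBN rotation on values sampled from the normal map and the interpolated vertex basis:

```csharp
using SlimDX;

// Illustrative only: expanding a sampled normal-map texel and rotating it into world space
public static class NormalMapping {
    public static Vector3 DecodeNormal(Color4 sample, Vector3 tangentW, Vector3 bitangentW, Vector3 normalW) {
        // Expand from the [0,1] texture range to the [-1,1] normal range
        var nT = new Vector3(2.0f * sample.Red - 1.0f,
                             2.0f * sample.Green - 1.0f,
                             2.0f * sample.Blue - 1.0f);

        // Transform from tangent space to world space using the TBN basis
        var nW = nT.X * tangentW + nT.Y * bitangentW + nT.Z * normalW;
        return Vector3.Normalize(nW);
    }
}
```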

The other technique is called displacement mapping. Similarly, we use an additional texture to specify the per-texel surface details, but this time, rather than a surface normal, the texture, called a displacement map or heightmap, stores an offset that indicates how much the texel sticks out or is sunken in from its base position. We use this offset to modify the position of the vertices of an object along the vertex normal. For best results, we can increase the tessellation of the mesh using a domain shader, so that the vertex resolution of our mesh is as great as the resolution of our heightmap. Displacement mapping is often combined with normal mapping, for the highest level of realism.
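
Again purely for illustration (the real work is done per-vertex in the domain shader), the displacement step amounts to pushing each position along its normal by the sampled height, scaled by a tuning parameter. The heightScale name and the (h - 1) bias used here are assumptions of mine, matching one common convention where heights below 1 sink the surface inward:

```csharp
using SlimDX;

// Illustrative only: offsetting a vertex along its normal by a sampled height value
public static class DisplacementMapping {
    public static Vector3 Displace(Vector3 positionW, Vector3 normalW, float sampledHeight, float heightScale) {
        // sampledHeight is assumed to be in [0,1]; (sampledHeight - 1) biases the offset inward
        return positionW + heightScale * (sampledHeight - 1.0f) * normalW;
    }
}
```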

[Image: Normal mapped columns]

[Image: Displacement mapped columns]

This example is based off of Chapter 18 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0. You can download the full source for this example from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the NormalDisplacementMaps project.

NOTE: You will need to have a DirectX 11 compatible video card in order to use the displacement mapping method presented here, as it makes use of the Domain and Hull shaders, which are new to DX 11.

Read More...


Dynamic Environmental Reflections in Direct3D 11 and C#

Posted 11 September 2013 · 563 views
C#, DirectX 11, SlimDX and 3 more...

Last time, we looked at using cube maps to render a skybox around our 3D scenes, and also how to use that sky cubemap to render some environmental reflections onto our scene objects. While this method of rendering reflections is relatively cheap, performance-wise, and can give an additional touch of realism to background geometry, it has some serious limitations if you look at it too closely. For one, none of our local scene geometry is captured in the sky cubemap, so, for instance, you can look at our reflective skull in the center and see the reflections of the distant mountains, which should be occluded by the surrounding columns. This deficiency can be overlooked for minor details, or for surfaces with low reflectivity, but it really sticks out if you have a large, highly reflective surface. Additionally, because we are using the same cubemap for all objects, the reflections on any object in our scene are not totally accurate, as our cubemap sampling technique does not take into account the position of the environment-mapped object within the scene.


The solution to these issues is to render a cube map, at runtime, for each reflective object using Direct3D. By rendering the cubemap for each object on the fly, we can incorporate all of the visible scene details (characters, geometry, particle effects, etc.) in the reflection, which looks much more realistic. This comes, of course, at the cost of the overhead involved in rendering these additional cubemaps each frame, as we have to effectively render the whole scene six times for each object that requires dynamic reflections.
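
As a rough sketch of the per-face camera setup (not the DynamicCubeMap project's actual code), each face gets a 90-degree view from the reflective object's position; the renderSceneToFace callback below is a placeholder for binding that face's render target view and drawing the scene:

```csharp
using System;
using SlimDX;

// Illustrative only: the six look/up directions used to render a dynamic cube map
public static class DynamicCubeMapping {
    public static void RenderFaces(Vector3 center, Action<int, Matrix, Matrix> renderSceneToFace) {
        Vector3[] targets = {
            center + Vector3.UnitX, center - Vector3.UnitX,   // +X, -X
            center + Vector3.UnitY, center - Vector3.UnitY,   // +Y, -Y
            center + Vector3.UnitZ, center - Vector3.UnitZ    // +Z, -Z
        };
        Vector3[] ups = {
            Vector3.UnitY, Vector3.UnitY,
            -Vector3.UnitZ, Vector3.UnitZ,                    // the +Y/-Y faces need different up vectors
            Vector3.UnitY, Vector3.UnitY
        };

        // A 90-degree FOV with a square aspect ratio covers each cube face exactly
        var proj = Matrix.PerspectiveFovLH((float)Math.PI / 2.0f, 1.0f, 0.1f, 1000.0f);

        for (var face = 0; face < 6; face++) {
            var view = Matrix.LookAtLH(center, targets[face], ups[face]);
            renderSceneToFace(face, view, proj);   // bind this face's RTV and draw the scene
        }
    }
}
```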


This example is based on the second portion of Chapter 17 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, with the code ported to C# and SlimDX from the native C++ used in the original example. You can download the full source for this example from my GitHub repository, at https://github.com/ericrrichards/dx11.git, under the DynamicCubeMap project.




Read More...



Skyboxes and Environmental Reflections using Cube Maps in Direct3D 11 and SlimDX

Posted 08 September 2013 · 647 views
C#, SlimDX, Direct3D 11 and 4 more...

This time, we are going to take a look at a special class of texture, the cube map, and a couple of the common applications for cube maps: skyboxes and environment-mapped reflections. Skyboxes allow us to model far-away details, like the sky or distant scenery, in an inexpensive way, creating a sense that the world is more expansive than just our scene geometry. Environment-mapped reflections allow us to model reflections on surfaces that are irregular or curved, rather than on flat, planar surfaces as in our Mirror Demo.


The code for this example is adapted from the first part of Chapter 17 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0. You can download the full source for this example from my GitHub repository, https://github.com/ericrrichards/dx11.git, under the CubeMap project.




Read More...



Camera Picking in SlimDX and Direct3D 11

Posted 01 September 2013 · 767 views
C#, SlimDX, Direct3D 11, Picking

So far, we have only been concerned with drawing a 3D scene to the 2D computer screen, by projecting the 3D positions of objects to the 2D pixels of the screen. Often, you will want to perform the reverse operation; given a pixel on the screen, which object in the 3D scene corresponds to that pixel? Probably the most common application for this kind of transformation is selecting and moving objects in the scene using the mouse, as in most modern real-time strategy games, although the concept has other applications.



The traditional method of performing this kind of object picking relies on a technique called ray-casting. We shoot a ray from the camera position through the selected point on the near-plane of our view frustum, which is obtained by converting the screen pixel location into normalized device coordinates, and then intersect the resulting ray with each object in our scene. The first object intersected by the ray is the object that is “picked.”
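
A condensed sketch of that unprojection (my own simplification, not the PickingDemo source verbatim) looks roughly like this in SlimDX, with the ray returned in world space:

```csharp
using SlimDX;

// Illustrative only: building a world-space picking ray from a mouse position
public static class Picking {
    public static void BuildRay(int mouseX, int mouseY, int viewportWidth, int viewportHeight,
                                Matrix proj, Matrix view, out Vector3 rayOrigin, out Vector3 rayDir) {
        // Pixel coordinates -> normalized device coordinates on the near plane,
        // undoing the projection's X/Y scaling
        var vx = (2.0f * mouseX / viewportWidth - 1.0f) / proj.M11;
        var vy = (-2.0f * mouseY / viewportHeight + 1.0f) / proj.M22;

        // Ray in view space: from the camera origin through the selected point
        var originV = new Vector3(0, 0, 0);
        var dirV = new Vector3(vx, vy, 1.0f);

        // Transform the ray into world space with the inverse view matrix
        var invView = Matrix.Invert(view);
        rayOrigin = Vector3.TransformCoordinate(originV, invView);
        rayDir = Vector3.Normalize(Vector3.TransformNormal(dirV, invView));
    }
}
```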

The code for this example is based on Chapter 16 of Frank Luna’s Introduction to 3D Game Programming with Direct3D 11.0, with some modifications. You can download the full source from my GitHub repository at https://github.com/ericrrichards/dx11.git, under the PickingDemo project.




Read More...





