

DwarvesH

Member Since 12 Jun 2013
Offline Last Active Sep 15 2014 01:19 AM

Topics I've Started

Porting from DirectX9 to DirectX10 questions

05 September 2014 - 07:15 AM

Hi everybody!

 

So I am at my third or fourth attempt at porting my base application from DirectX9 to DirectX10, and while this is going better than the previous attempts, I'm still having tons of problems.

 

I managed to get basic rendering working, but now I need to allow for the possibility of changing the device settings.

 

So here are a few questions:

 

1. How do you "recreate" the device?

 

In DX9 you could change render targets and such, but generally you had to reset your device.

 

In DX10 I think you can use swapChain.ResizeBuffers for simple cases, like user window resize. That fails silently for me, so I guess I need to clear up some resources.

 

I did some Googling and found that I need to Dispose all the resources created when the swap chain was created.
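 

Roughly, the sequence seems to be: release everything that still references the back buffer, call ResizeBuffers, then rebuild the views. A minimal sketch (field names like renderView/depthBuffer are placeholders, and the exact SharpDX helper names may differ between versions):

void ResizeSwapChain(int width, int height)
{
    // 1. Release everything that references the swap chain's back buffer;
    //    if any reference is still alive, ResizeBuffers refuses to work.
    renderView.Dispose();
    backBuffer.Dispose();
    depthView.Dispose();
    depthBuffer.Dispose();

    // 2. Resize the existing buffers, keeping the buffer count and format.
    swapChain.ResizeBuffers(bufferCount, width, height,
                            Format.Unknown, SwapChainFlags.None);

    // 3. Re-acquire the back buffer and rebuild the views
    //    (or swapChain.GetBackBuffer<Texture2D>(0), depending on the version).
    backBuffer = Texture2D.FromSwapChain<Texture2D>(swapChain, 0);
    renderView = new RenderTargetView(device, backBuffer);
    // ... recreate the depth buffer and any off-screen render targets here ...

    device.OutputMerger.SetTargets(depthView, renderView);
    device.Rasterizer.SetViewports(new Viewport(0, 0, width, height, 0.0f, 1.0f));
}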

 

This works fine, but I was wondering how to manage a lot of render targets. The engine can use anywhere from 1 to around 10 for a full suite of effects. Should this be done through the swap chain, or should one keep the swap chain buffer count low and manage the render targets manually?

 

Secondly, how do you really "recreate" the device, like when changing the MSAA mode? All my attempts to recreate the swap chain resulted in the display driver crashing. And how should I handle going from fullscreen to windowed, with or without a render target size change?
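 

From what I gather, "recreating" for an MSAA change means tearing the old swap chain down and building a new one with a new SampleDescription, roughly like this (a hedged sketch, not code I have working; dxgiFactory, windowHandle and ReleaseSwapChainResources are placeholders):

void RecreateSwapChain(int width, int height, int msaaCount)
{
    ReleaseSwapChainResources();                // same cleanup as before ResizeBuffers
    swapChain.SetFullscreenState(false, null);  // drop out of fullscreen before tearing down
    swapChain.Dispose();

    var desc = new SwapChainDescription
    {
        BufferCount = 1,
        ModeDescription = new ModeDescription(width, height,
                              new Rational(60, 1), Format.R8G8B8A8_UNorm),
        IsWindowed = true,
        OutputHandle = windowHandle,
        SampleDescription = new SampleDescription(msaaCount, 0),
        SwapEffect = SwapEffect.Discard,
        Usage = Usage.RenderTargetOutput
    };
    swapChain = new SwapChain(dxgiFactory, device, desc);
    // ... then rebuild the back buffer view, depth buffer and viewport as above ...
}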

 

2. The Effect framework is a lot weirder in DX10. Should I update my engine to use plain vertex and pixel shaders instead of effects?
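 

In case I do go that way, the raw path looks roughly like this (a sketch; "shader.hlsl", the input elements and perFrameConstants are placeholders, and the exact SharpDX overloads may differ):

var vsBytecode = ShaderBytecode.CompileFromFile("shader.hlsl", "VS", "vs_4_0");
var psBytecode = ShaderBytecode.CompileFromFile("shader.hlsl", "PS", "ps_4_0");

var vertexShader = new VertexShader(device, vsBytecode);
var pixelShader  = new PixelShader(device, psBytecode);
var layout = new InputLayout(device, ShaderSignature.GetInputSignature(vsBytecode),
    new[] { new InputElement("POSITION", 0, Format.R32G32B32_Float, 0, 0) });

// per draw call: bind the stages yourself instead of EffectPass.Apply()
device.InputAssembler.InputLayout = layout;
device.VertexShader.Set(vertexShader);
device.PixelShader.Set(pixelShader);
// constant buffers replace effect variables
device.VertexShader.SetConstantBuffer(0, perFrameConstants);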

 

3. Font.DrawText ruins the device context. I tried setting device.Rasterizer.State and device.OutputMerger.DepthStencilState at the beginning of the draw function, but that is not enough. I temporarily fixed this by using Sprite and calling Begin/End with a parameter so it backs up the device state, but it would be better if I could reset all the device context state myself so that rendering works without the use of Sprite.
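 

For reference, these are the states I believe the D3DX text path touches and that would need re-applying before my own draws (a sketch; the my* fields are placeholders, the BlendState property is an assumption, and I'm not sure the list is complete, which is sort of the problem):

device.InputAssembler.InputLayout = myLayout;
device.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
device.VertexShader.Set(myVertexShader);
device.GeometryShader.Set(null);
device.PixelShader.Set(myPixelShader);
device.OutputMerger.BlendState = myBlendState;
device.OutputMerger.DepthStencilState = myDepthStencilState;
device.Rasterizer.State = myRasterizerState;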

 

4. DX9 used BGRA colors and DX10 uses RGBA colors?
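 

If so, I assume fixing up packed colors is just a channel swizzle, something like this (illustrative only):

// re-order a D3D9-style packed 0xAARRGGBB value into the byte order
// that DXGI's R8G8B8A8 formats expect (R, G, B, A in memory)
static uint ArgbToRgba(uint argb)
{
    uint a = (argb >> 24) & 0xFF;
    uint r = (argb >> 16) & 0xFF;
    uint g = (argb >> 8)  & 0xFF;
    uint b =  argb        & 0xFF;
    return (a << 24) | (b << 16) | (g << 8) | r;
}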


Terrain stitching micro seams

08 April 2014 - 01:16 AM

I am experiencing a weird phenomenon. 

 

I have implemented large scale terrain that is divided into chunks of various resolutions/LODs. When two chunks of different LOD are next to each other, I do stitching once on the CPU to fix the seam. The way I do it is to take the two points from the higher resolution chunk that line up with the lower resolution one and compute their average height. Then I set the middle vertex height to that average, and apparently this fixes all discontinuities between chunks.
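 

In code the stitching is essentially this (a simplified sketch; fineEdgeHeights and the indexing are illustrative, assuming every odd vertex along the fine edge falls between two vertices that line up with the coarse chunk):

for (int i = 1; i < fineEdgeHeights.Length - 1; i += 2)
{
    float left  = fineEdgeHeights[i - 1];   // lines up with a coarse vertex
    float right = fineEdgeHeights[i + 1];   // lines up with the next coarse vertex
    fineEdgeHeights[i] = 0.5f * (left + right);
}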

 

But something is amiss. It seems that my simple average is not quite on par with the way the hardware rasterizes the triangles, and sometimes a single-pixel-wide hole appears. These come and go as you move the camera around, often staying only for one frame.

 

http://dl.dropboxusercontent.com/u/45638513/ter08.png

 

In the above image I was lucky enough to capture two of the fleeting seams near each other. The red line is the LOD discontinuity. The two seams are circled in red. As I move the camera around, white pixels come and go, all of them on the discontinuity line. Otherwise, when the pixels don't show up, there are no seams.

 

Any idea how to fix this? My far plane is pretty far away, which may have something to do with it.

 

The only idea I have is to add extra triangles in that "seam".

 

Thank you!

 

 


Oh no, not this topic again: LH vs. RH.

03 April 2014 - 06:38 AM

So I'm porting my game over from XNA (RH) to SharpDX (LH or RH, LH traditionally) and have finally arrived at the last phase: the terrain. Once this is done, porting will finally be behind me!

 

And here is where LH vs. RH really causes major changes. For terrain I needed to do a ton of adjustments, even having to transpose the terrain just for physics.

 

In the past I did not mind switching from RH to LH. I had to use different matrices and swap the Z in positions and normals at mesh load, and things were pretty much done.
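 

Concretely, the load-time conversion is basically this (a sketch; vertices and indices stand in for whatever the mesh loader produces):

for (int i = 0; i < vertices.Length; i++)
{
    vertices[i].Position.Z = -vertices[i].Position.Z;
    vertices[i].Normal.Z   = -vertices[i].Normal.Z;
}
// depending on the cull mode setup, mirroring one axis also reverses the
// triangle winding, so the indices (or the cull mode) may need flipping too
for (int i = 0; i < indices.Length; i += 3)
{
    int tmp = indices[i + 1];
    indices[i + 1] = indices[i + 2];
    indices[i + 2] = tmp;
}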

 

But recently I noticed that logically LH does not make that much sense. Everything from applying a 2D grid on top of the 3D world to mapping mathematical matrices to world positions to drawing mini maps is a lot more intuitive using RH. Especially the mapping of matrices. In programming you often have an (x, y) matrix that is represented as y rows of x columns. This maps intuitively to right-handed. You need to put element (0, 0) at your coordinate system's origin and map Y to Z (assuming your world is not centered; even if it is, just subtract an offset).
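 

For example (illustrative only; cellSize and the axis choice are arbitrary):

// element (0, 0) sits at the origin, column x goes along +X,
// row y goes along +Z, and the stored height becomes Y
Vector3 GridToWorld(int x, int y, float[,] heights, float cellSize)
{
    return new Vector3(x * cellSize, heights[y, x], y * cellSize);
}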

 

LH on the other hand is more difficult to map. Especially since you often need to transform things from one space into another in your head.

 

Are there any good reasons to use LH, other than that DirectX has traditionally used it and that it may make integrating things from DirectX easier?

 

I'm really torn between the two. But if I switch back to RH it must be done now.


SharpDX Matrix.Forward curiosity

21 March 2014 - 03:03 AM

Now I'm proficient with the practical application of matrices in rendering, but I can't say I know exactly the meaning of all the rows and columns in the different transformation matrices.

 

I was integrating a new physics engine into my SharpDX engine and I had to port the cameras from an RH to an LH coordinate system. Everything seems to work correctly and rendering is also correct, with the exception of the camera view matrix's Forward component, which is (0, 0, -1).

 

Just to be sure that I did not mess up the RH/LH conversion, I tested my old tried and true cameras, with which I never had any issues, and even there the Forward component is (0, 0, -1).

 

So in matrix theory, shouldn't my forward components have the same sign as my forward vector? I do have my axes properly set up in my camera:

public Vector3 XAxis = new Vector3(1, 0, 0);
public Vector3 YAxis = new Vector3(0, 1, 0);
public Vector3 ZAxis = new Vector3(0, 0, 1);

Checking the SharpDX documentation, it says that Forward returns (-M31, -M32, -M33).

http://sharpdx.org/documentation/api/p-sharpdx-matrix-forward
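 

If I read that right, Forward just blindly negates the third row, following the XNA-style right-handed convention where -Z is "forward", so for an LH camera looking down +Z it will always come out as (0, 0, -1). A quick check (a sketch, assuming SharpDX's Matrix.LookAtLH):

// an LH look-at view for a camera at the origin looking down +Z has
// (M31, M32, M33) == (0, 0, 1), so negating that row gives (0, 0, -1)
var view = Matrix.LookAtLH(Vector3.Zero,           // eye
                           new Vector3(0, 0, 1),   // target
                           Vector3.UnitY);         // up
Vector3 forward = view.Forward;                    // (0, 0, -1)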

 

Is this normal?


I love shader permutations! (not sure if ironic or not)

15 March 2014 - 11:35 AM

So I decided to support most reasonable lighting setups. And I also want them to perform at maximum speed, so run-time shader dynamic branching is out of the question.

 

So I came up with the concept of render profiles. Each combination of render profiles results in a unique pixel shader. All permutations are automatically generated.

 

Render profiles support the following settings for now:

  • Ambient mode. Can be off, flat color modulation, spherical harmonics, or environment map ambient lighting. For metallic or mixed objects, off ambient lighting is the same as flat, for PBR-related reasons.
  • Baked ambient AO. On or off. Baked AO only gets displayed in the ambient component because you shouldn't have AO in strong direct light.
  • SSAA mode: three settings. Off, medium and super duper ultra for now.
  • HDR mode. Currently only off and a single tone-mapping operator is supported. I'll add things like filmic later.
  • Material nature: metallic or dielectric. Or mixed, where you can lerp between metallic and dielectric.

This is pretty comprehensive. I found that a handful of render profiles are enough to render scenes.

 

The only problem is the number of permutations. With this limited setup there are already 120 different techniques, and I can easily see this getting over 1000. They are autogenerated, so that's not a big problem in itself, but I was wondering how others do this.
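 

For context, the generator is conceptually just a cartesian product over the profile axes, with the chosen #defines prepended to a single shader source, something along these lines (a sketch; the option names and counts here are illustrative, not my exact list):

using System.Collections.Generic;
using System.Linq;

static class PermutationGenerator
{
    static readonly string[][] Options =
    {
        new[] { "AMBIENT_OFF", "AMBIENT_FLAT", "AMBIENT_SH", "AMBIENT_ENVMAP" },
        new[] { "BAKED_AO_OFF", "BAKED_AO_ON" },
        new[] { "SSAA_OFF", "SSAA_MEDIUM", "SSAA_ULTRA" },
        new[] { "HDR_OFF", "HDR_TONEMAP" },
        new[] { "MAT_METALLIC", "MAT_DIELECTRIC", "MAT_MIXED" }
    };

    // cartesian product of all option axes
    static IEnumerable<string[]> Combinations(int axis = 0)
    {
        if (axis == Options.Length) { yield return new string[0]; yield break; }
        foreach (var choice in Options[axis])
            foreach (var rest in Combinations(axis + 1))
                yield return new[] { choice }.Concat(rest).ToArray();
    }

    // one (technique name, shader source) pair per permutation; each source
    // is then compiled with FXC / ShaderBytecode.Compile as usual
    public static IEnumerable<KeyValuePair<string, string>> Techniques(string baseSource)
    {
        foreach (var combo in Combinations())
        {
            string defines = string.Concat(combo.Select(d => "#define " + d + "\n"));
            yield return new KeyValuePair<string, string>(string.Join("_", combo),
                                                          defines + baseSource);
        }
    }
}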

 

Manual shader source management is out of the question. Even a custom tailored solution that only works with exactly what I want to render, with shaders compiled for my setup only, would have dozens of permutations, so generation seems to win. The 120 techniques occupy 100 KiB of source code and take 10 seconds to compile under FXC, but the precompiled versions load very fast.

 

So my question for more experienced folk: is this a good approach? Half-Life 2 uses almost 2000 permutations, so I'm not the only one doing this. And pretty much everyone uses permutations to handle different light setups and light counts, unless they write those fancy shaders with for loops.

