Member Since 01 Jan 2009
Offline Last Active Mar 17 2015 10:43 AM

#4978019 Two very strange SlimDX DX11 errors

Posted on 08 September 2012 - 10:31 AM

I'm a bit confused.
You say the error is related to 'SlimDX.Direct3D10.Device.OpenSharedResource(System.IntPtr)',
yet the call you changed is 'SlimDX.Direct3D11.Resource.FromSwapChain(swapChain, 0)'.

Can you show a little more code?
Specifically the part where you obtain and access the shared resource.

#4977403 Rendering text - C#, SlimDX, DirectX11 - how?

Posted on 06 September 2012 - 04:52 PM

Yes, I have written my own sprite renderer that handles both sprite sheets and fonts.
It handles both because a font handled this way works basically the same as a sprite sheet:
Each font glyph is a sprite and they are all stored within the same texture, which is your sprite sheet.

You need to know where on the texture each glyph is located.
Tools such as FontBuilder provide this information for you in an additional file.
You need to parse this information and build a mapping from single characters to glyph locations in your code (a C# Dictionary, a C++ std::map, ...).

Then create a simple function that draws text.
Just split the string you want to draw into individual characters, look them up in your map, and render a screen quad using this data.
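The lookup-and-draw loop can be sketched like this (a C++ sketch with illustrative names; a real font description also stores per-glyph advance and kerning, which this ignores):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

struct Rect { float x, y, w, h; };           // pixels on the sprite sheet
struct Quad { Rect src; float dstX, dstY; }; // one screen quad per glyph

// Turn a string into a list of quads: look each character up in the
// glyph map and advance a "pen" position by the glyph's width.
std::vector<Quad> LayoutText(const std::string& text, float x, float y,
                             const std::unordered_map<char, Rect>& glyphs)
{
    std::vector<Quad> quads;
    for (char c : text)
    {
        auto it = glyphs.find(c);
        if (it == glyphs.end())
            continue; // character not in the font, skip it
        quads.push_back({ it->second, x, y });
        x += it->second.w; // naive advance; real fonts store this per glyph
    }
    return quads;
}
```

Each resulting quad then becomes one draw (or, later, one instance record).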

Here's some old HLSL code I had lying around:

[source lang="cpp"]
Texture2D SpriteSheet : register(t0);
SamplerState SpriteSampler : register(s0);

cbuffer CBText : register(b0)
{
    float2 TextureSize : packoffset(c0.x);
    float2 ViewportSize : packoffset(c0.z);
}

struct VSInput
{
    // Vertex Data
    float2 Position : POSITION0;
    // Instance Data
    float4 Color : COLOR0;
    float4 Source : SRCRECT;
    float4 Destination : DSTRECT;
};

struct PSInput
{
    float4 Position : SV_Position;
    float2 TexCoord : TEXCOORD;
    float4 Color : COLOR;
};

PSInput VSText(VSInput input)
{
    PSInput output = (PSInput)0;

    float4 SourceRect = float4(input.Source.xy / TextureSize, input.Source.zw / TextureSize);
    float4 DestinationRect = float4(input.Destination.xy / ViewportSize, input.Destination.zw / ViewportSize);

    float2 OutPos = input.Position * DestinationRect.zw + DestinationRect.xy;
    OutPos = OutPos * 2.0 - 1.0;
    OutPos.y = -OutPos.y;

    output.Position = float4(OutPos, 0, 1);
    output.TexCoord = input.Position * SourceRect.zw + SourceRect.xy;
    output.Color = input.Color;

    return output;
}

float4 PSText(PSInput input) : SV_TARGET
{
    float4 Color = SpriteSheet.Sample(SpriteSampler, input.TexCoord);
    Color = Color * input.Color;
    Color.rgb *= Color.a;
    return Color;
}
[/source]

You render a simple unit quad (from (0, 0) to (1, 1)) with these shaders.
Source, Destination, TextureSize and ViewportSize are provided in pixels.
If you've never used hardware instancing before you might want to create an implementation without it first.
Just put the Color/Source/Destination instance data I have into the constant buffer for now, and make 1 draw call per character.
Remember to add some distance between characters so they're not all rendered in the same spot.

To be honest, doing this without hardware instancing is probably fine, unless you render a lot of text.
I'd only look into it if you think that it is affecting your performance, or if you want to learn a very efficient way of doing it.
Once you have this running, changing it to use hardware instancing is pretty straightforward and reduces your draw calls to 1.


Edit: Don't use a texture for each font glyph. That's a really bad idea. :D
Think about it: for each character you draw, you would not only issue a draw call but also make a state change (binding a different texture).
This isn't horribly expensive, but the overhead of doing so quickly builds up if you're rendering a lot of characters.

#4976395 Rendering text - C#, SlimDX, DirectX11 - how?

Posted on 04 September 2012 - 06:31 AM

I've never used Aaron's method, so I'm not sure what the advantages / downsides of it are, but according to his blog post I'd say it works fine.

Personally, I create a texture containing all characters for the font I want to use ahead of time.
You can do this with a program such as FontBuilder or you can write something yourself.

I then render text using hardware instancing.
I queue up the information for all characters that need to be drawn: where they go on screen, where they are on the texture, what color I want them in, and so on.
Then I build an instance buffer from that information, and render the entire text using a single draw call.
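The queued per-character information maps naturally onto one fixed-size record per instance. A C++ sketch of such a layout (field names are my own, not from any particular API):

```cpp
#include <cassert>
#include <cstddef>

// One record per character, mirroring per-instance vertex inputs:
// a destination rect on screen, a source rect on the sprite sheet,
// and a color tint.
struct CharInstance
{
    float destX, destY, destW, destH; // where to draw, in pixels
    float srcX,  srcY,  srcW,  srcH;  // where the glyph is on the texture
    float r, g, b, a;                 // color tint
};

// The GPU-side input layout assumes tightly packed 32-bit floats,
// so the struct must contain no padding: 12 floats = 48 bytes.
static_assert(sizeof(CharInstance) == 12 * sizeof(float),
              "instance records must be tightly packed");
```

An array of these records is the instance buffer; one instanced draw call then renders the whole string.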

That being said, if you feel that Aaron's method is practical and it provides everything you need, go with it.
No need to make things more complicated if you have a solution that fits your needs. :)

#4937203 Problem with deferred rendering

Posted on 03 May 2012 - 02:37 PM

Yeah, that doesn't look like a deferred rendering related issue.
Your depth buffer seems to be set up incorrectly.
Check that you have depth writing enabled and that your depth buffer is properly bound during your geometry pass.

#4929909 3d noise turbulence functions for terrain generation

Posted on 10 April 2012 - 08:58 AM

For my master's thesis I am investigating methods to generate entire planets procedurally on the GPU.
I'm currently generating 33x33 patches in compute shaders, and store height and normal data in a structured buffer.
I then render a single 33x33 grid using hardware instancing, projecting it to its correct location and sampling the generated data from the structured buffer.
So far this is working very well, and it is easily fast enough to regenerate the entire geometry every frame.

Whilst this is sufficient for the thesis, using only fBm turbulence looks very boring, and not at all like actual terrain.
Unfortunately I'm having a very hard time coming up with decent turbulence functions to produce good results.

My current approach is to first generate a continent/ocean map, using 5 octaves of fBm turbulence, which produces something like this:
[screenshot]

This is not bad; I can work with that as a basis, especially if I add climate zones on top (more sand-like textures near the equator, snow at lower heights near the poles).
I should also be able to generate a mountain map using the same technique with only a few octaves of noise.

But I'm at a loss as to how to generate the close up geometry.
I can't figure out how to generate any decent looking mountains at all, which are probably the most important feature.
I've experimented with ridged multifractals a lot, but this is the best I can come up with:
[screenshots]

For which I've used the following turbulence function:
[source lang="cpp"]
float rnoise(float3 p)
{
    float n = 1.0f - abs(inoise(p) * 2.0);
    return n * n - 0.5;
}

float rmf(float3 p, int octaves, float frequency, float lacunarity, float gain)
{
    float sum = 0;
    float amp = 1.0;
    for(int i = 0; i < octaves; i++)
    {
        float n = rnoise(p * frequency);
        sum += n * amp;
        frequency *= lacunarity;
        amp *= gain;
    }
    return sum;
}
[/source]
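For reference, the same loop as self-contained C++, with a cheap sine-based stand-in for inoise (a real implementation would use Perlin or simplex noise, so the resulting shapes will differ):

```cpp
#include <cassert>
#include <cmath>

struct float3 { float x, y, z; };

// Placeholder noise stand-in, in [-1, 1]. Swap in a real
// Perlin/simplex implementation for actual terrain.
float inoise(float3 p)
{
    return std::sin(p.x * 12.9898f + p.y * 78.233f + p.z * 37.719f);
}

// Ridged basis: fold the noise around zero and sharpen it,
// giving values in [-0.5, 0.5].
float rnoise(float3 p)
{
    float n = 1.0f - std::fabs(inoise(p) * 2.0f);
    return n * n - 0.5f;
}

// Ridged multifractal: sum octaves of the ridged basis, raising the
// frequency and shrinking the amplitude each iteration.
float rmf(float3 p, int octaves, float frequency, float lacunarity, float gain)
{
    float sum = 0.0f;
    float amp = 1.0f;
    for (int i = 0; i < octaves; i++)
    {
        float n = rnoise({ p.x * frequency, p.y * frequency, p.z * frequency });
        sum += n * amp;
        frequency *= lacunarity;
        amp *= gain;
    }
    return sum;
}
```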

With the parameters
float Height = pow(2.0, rmf(p, 18, 300.0, 1.75, 0.6));

And this looks pretty bad.
The placeholder textures aren't helping either, but that's another issue.
I don't really know where to go from here.
The only turbulence functions I can find are fBm and ridged multifractals.

The exception being Gilliam de Carpentier, who has a few really cool examples of using noise derivatives on his blog:

Unfortunately I get very different results with the same techniques, which I'm assuming is because he's using 2D noise and I'm using 3D noise.

Clearly I need different turbulence functions, but I don't know where to find them or how to come up with new ones.
I appreciate any suggestions on the matter.


#4918929 Sean O'Neil's atmospheric scattering

Posted on 03 March 2012 - 12:43 PM

I seem to have misunderstood something rather important when reading the article, which is
that the atmosphere is supposed to be rendered behind the planet, not in front of it.
In retrospect I have no idea how I came to another conclusion.

After fixing this major issue I got much better results and have begun modifying the shaders so they can be directly applied to a gbuffer.
I currently perform all of the geometry shading in a single pass, and then
add the outer atmosphere in a second pass.
Both could be combined into a single pass, but that involves rather heavy dynamic branching.

This is where I'm currently at:
[screenshot]

I had to change the
float fCameraAngle = dot(-v3Ray, v3Pos);

variable to

float fCameraAngle = 1.0f;

for ground shading, because it would otherwise produce weird results when the camera is close to the inner radius.
I'm not really sure why that is, but it works fine with 1.0f.

It still looks a bit odd, but that is mostly because I'm using a red planet, on which an earth-like atmosphere feels out of place.
The sky is also quite a bit too dark. I'll have to tweak some values to get that right, but that shouldn't be too difficult.
Once I get that sorted, add some HDR, and get some work done on my noise functions for the terrain generation, it should look rather pretty.


#4907599 View and Projection Matrices for VR Window using Head Tracking

Posted on 30 January 2012 - 06:42 AM

Unfortunately I can't seem to find the code I've written back when I posted this.

However I seem to recall most of the problems you are having.

As you can see it does not fill the screen. If I then move the camera to the right, the perspective seems to change correctly and the open face of the room stays put:

If you want the box to fill the screen you have to align the perspective matrix with the size of your model.
For my tunnel I used a model that was 16.0f by 9.0f (my TV is 16:9).
So in order for it to fill the screen I had to do a few things:

- Make sure the model origin (0, 0, 0) of the "tunnel in VR world" model is at the center of the TV screen
- Position the "tunnel in VR world" model at (0, 0, 0)
- Make sure the top, right, bottom, and left planes match the size of the model.
If your model is .38f by 3f then your calculations should be correct.

Moving the camera in Z merely alters the perceived depth of the room; it does not change its position:

If your goal is to create some sort of VR window, then this behavior is correct.

I know that it looks weird, especially if all you have on display is an empty box.
If you add more objects to the scene, or control camera movement with the Kinect,
you will notice that the effect is actually correct.

If I move the model so that it fills more of the screen (e.g. Z=0.08) and then move the camera in any direction, the image perspective is OK, but the open face of the room has moved to the left, as if it were rotating the room around world (0,0,0):

You should not move the model at all!
In order to get the desired behavior you have to line up your perspective matrix with the size of your model.
The model always stays at (0, 0, 0); the only thing that changes is your camera position.
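Lining the projection up with a fixed physical screen is the classic off-axis (asymmetric-frustum) projection. A minimal C++ sketch, assuming the screen is centered at the world origin in the XY plane and the eye position comes from head tracking (names are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Frustum { float l, r, b, t; };

// Off-axis frustum for a screen of size (w, h) centered at the world
// origin in the XY plane, viewed from eye position (ex, ey, ez), ez > 0.
// Extents are given at the near plane, the form expected by a
// perspective-off-center matrix (e.g. D3DXMatrixPerspectiveOffCenterLH).
Frustum OffAxisFrustum(float w, float h, float ex, float ey, float ez, float zNear)
{
    float scale = zNear / ez; // project screen edges onto the near plane
    Frustum f;
    f.l = (-w * 0.5f - ex) * scale;
    f.r = ( w * 0.5f - ex) * scale;
    f.b = (-h * 0.5f - ey) * scale;
    f.t = ( h * 0.5f - ey) * scale;
    return f;
}
```

With the eye dead-center the frustum is symmetric (an ordinary perspective matrix); as the head moves, the frustum skews so the image stays glued to the screen rectangle.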

Would your camera position directly correlate with the head position as reported by Kinect or would you be doing some manipulation to keep the distance from head to screen center constant (some kind of orbital camera movement)?

Well, here's where things get tricky.

The camera position does directly correlate with the head position, yes (no orbital camera movement or anything like that). You will get decent results doing this, but you will notice that something is off. This is because the Kinect camera cannot be positioned at the center of your screen. You have to put it either below or above the screen, which causes the coordinate systems to no longer match up. You can fix this by applying an offset to the camera position equal to the distance between the Kinect camera and the center of your screen.

However, this is still not 100% correct (but very close!). The last issue is that your Kinect camera is most likely angled upwards or downwards. To get perfect results you have to take that into account as well.

Here's a video I took while I was working on it:

As you can see the effect is not entirely correct, but that is because I couldn't put the camera in front of my head.
When I did, the Kinect would no longer track my head properly, so I had to put it in front of my chest.

If you can't seem to get it to work properly, I can have a look at your VS project if you want me to.

#4862413 XNA - Common SurfaceFormat?

Posted on 16 September 2011 - 05:55 AM

BGR 565 is what is commonly referred to as 16-bit color.
COLOR, on the other hand, is 32-bit color.

I'd say that most modern games don't give you the option to select the color depth yourself and just default to 32-bit.
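For illustration (this is not XNA code), packing an 8-bit-per-channel color down to BGR 565 looks like this, assuming the usual D3D bit layout with red in the top five bits:

```cpp
#include <cassert>
#include <cstdint>

// Pack 8-bit RGB into a 16-bit BGR-565 value: 5 bits red (top),
// 6 bits green (middle), 5 bits blue (bottom) — the layout of
// D3DFMT_R5G6B5 / DXGI_FORMAT_B5G6R5_UNORM. Green keeps the extra
// bit because the eye is most sensitive to it.
uint16_t PackBgr565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```

The packing is lossy: the low 3 (or 2) bits of each channel are simply dropped, which is why 16-bit color shows visible banding in gradients.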

#4861784 [DX11] Tile map performance

Posted on 14 September 2011 - 05:13 PM

There's a lot of things you can improve, though as Krohm already mentioned, I'd only worry about it if performance actually becomes an issue.
Here's a more detailed list of things you can do to (greatly) improve your performance:

  • Use a single quad (2 triangles) to render all your tiles. That's really all you need.
    Your quad should have the size of a single tile and be created at the origin of your world space.
    Whenever you draw a tile, use a vertex shader to move the quad to the appropriate location by providing a WVP matrix.

    This will reduce your total vertex count and eliminate the need to update your vertex buffer every frame.
    You can now also set the vertex buffer to default or immutable.
  • Use frustum culling.
    You only need to draw the tiles that are actually visible on screen.
  • Put all your tile images into one big texture (texture atlas).
    I understand that you might not want to manually do that yet, but you can easily have your program do it for you at startup.
    Just calculate the texture size needed to hold all of your tiles and create your texture atlas at that size.
    Then render every tile to your new texture atlas, and keep track of the UV location for every single tile.
    Now you can render all of your tiles using the very same texture.
    Instead of switching textures you switch UV coordinates.

    This greatly cuts down your state changes.
  • If you want even more performance, do everything in a single draw call using hardware instancing.
    You'll need to create a second (dynamic) vertex buffer that holds the WVP matrix and UV data for every tile to be rendered.

    This will reduce your draw calls down to one.
    At this point you can easily render over 10k tiles without performance issues.
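The atlas bookkeeping from the third point can be sketched like this, assuming uniform square tiles packed row by row (names are illustrative):

```cpp
#include <cassert>

struct UVRect { float u, v, w, h; };

// Returns the normalized UV rectangle of tile `index` inside an atlas
// that packs `columns` tiles per row, each tile tileSize pixels wide,
// in an atlas of atlasSize x atlasSize pixels.
UVRect TileUV(int index, int columns, int tileSize, int atlasSize)
{
    int x = (index % columns) * tileSize; // pixel position in the atlas
    int y = (index / columns) * tileSize;
    float inv = 1.0f / (float)atlasSize; // pixels -> normalized UVs
    return UVRect{ x * inv, y * inv, tileSize * inv, tileSize * inv };
}
```

When drawing, you feed the returned rectangle to the shader instead of binding a different texture, so the texture state never changes between tiles.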

#4861504 [DX11] Tile map performance

Posted on 14 September 2011 - 05:43 AM

When using a single texture atlas, you can also draw everything with a single draw call if you use hardware instancing.

#4819403 What happened to SetTransform?

Posted on 04 June 2011 - 07:00 AM

Hey all,
I'm trying to learn DX11. I knew a very tiny bit of DX9 and wanted to pick up where I left off. But it's going slow. I'm making a new engine, and using bits of code from the prior engine. But I can't seem to find the equivalent of SetTransform in 11. I've got an old camera class from DX9 that I'm trying to migrate over, and that's the last bit I need...I think. Can someone give me a clue?

- Goishin

Isn't SetTransform() part of the fixed function pipeline?
DX10 and DX11 no longer support the fixed function pipeline.
You'll have to render using shaders and provide your transform matrices in a constant buffer.
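In practice that means composing the matrices yourself and uploading the result. A minimal C++ sketch of the composition step (the actual upload is done via Map or UpdateSubresource on a constant buffer, omitted here; row-vector convention assumed):

```cpp
#include <cassert>

struct Mat4 { float m[4][4]; };

// Multiply two row-major 4x4 matrices: out = a * b.
Mat4 Mul(const Mat4& a, const Mat4& b)
{
    Mat4 r = {};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// What used to be SetTransform(D3DTS_WORLD / VIEW / PROJECTION):
// compute WVP per object, write it into a struct mirroring the HLSL
// cbuffer, and upload it before the draw call.
Mat4 ComputeWVP(const Mat4& world, const Mat4& view, const Mat4& proj)
{
    return Mul(Mul(world, view), proj);
}
```

The vertex shader then does `mul(float4(pos, 1), WVP)` instead of relying on the fixed-function transform stages.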