

SmellyIrishMan

Member Since 12 Oct 2006
Offline Last Active Apr 21 2015 12:59 PM

Topics I've Started

[SharpDX] Unable to map a texture to staging buffer

19 April 2015 - 03:52 AM

So I'm trying to create an empty texture with a few mip levels, write to that texture using a compute shader, and then read those values back to the application. At the moment I have created the texture and my compute shader is running and filling the mips with solid colours. When I try to map the resource, though, I am not able to read anything.

 

Here is the basic setup.

Texture2DDescription textureDesc;
textureDesc.Width = 64;
textureDesc.Height = 64;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.Format = SharpDX.DXGI.Format.R16G16B16A16_Float;
textureDesc.SampleDescription.Count = 1;
textureDesc.SampleDescription.Quality = 0;
textureDesc.Usage = ResourceUsage.Default;
textureDesc.BindFlags = BindFlags.UnorderedAccess | BindFlags.ShaderResource;
textureDesc.CpuAccessFlags = CpuAccessFlags.None;
textureDesc.OptionFlags = ResourceOptionFlags.None;
SharpDX.Direct3D11.Texture2D emptyTexture = new SharpDX.Direct3D11.Texture2D(device, textureDesc);

UnorderedAccessViewDescription uavDesc = new UnorderedAccessViewDescription();
uavDesc.Format = SharpDX.DXGI.Format.R16G16B16A16_Float;
uavDesc.Dimension = UnorderedAccessViewDimension.Texture2D;
uavDesc.Texture2D.MipSlice = 0;
UnorderedAccessView uavMip0 = new UnorderedAccessView(device, emptyTexture, uavDesc);

computeShader.SetParameterResource("gOutput", uavMip0);
computeShader.SetParameterValue("fillColour", new Vector4(0.1f, 0.2f, 0.3f, 1.0f));
computeShader.Apply();
device.Dispatch(1, 64, 1);

BufferDescription bufferDesc = new BufferDescription();
bufferDesc.Usage = ResourceUsage.Staging;
bufferDesc.BindFlags = BindFlags.None;
bufferDesc.SizeInBytes = 8 * 64 * 64;
bufferDesc.CpuAccessFlags = CpuAccessFlags.Read;
bufferDesc.StructureByteStride = 8;
bufferDesc.OptionFlags = ResourceOptionFlags.None;
SharpDX.Direct3D11.Buffer localbuffer = new SharpDX.Direct3D11.Buffer(device, bufferDesc);

device.Copy(emptyTexture, 0, localbuffer, 0, SharpDX.DXGI.Format.R16G16B16A16_Float);

DataStream data = new DataStream(8 * 64 * 64, true, true);
DataBox box = ((DeviceContext)device).MapSubresource(localbuffer, MapMode.Read, MapFlags.None, out data);

Half4 value = data.ReadHalf4();

So no matter how many values I read at the end, they're always (0, 0, 0, 0). I'm not really sure where the problem lies at the moment.
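
For comparison, here is a rough sketch of the staging-texture readback pattern I've seen described elsewhere, in case it helps frame the question. It assumes the same 64x64 R16G16B16A16_Float texture from above and that device exposes an ImmediateContext; the stagingDesc, staging and stream names are placeholders of mine, not part of my actual code.

Texture2DDescription stagingDesc = textureDesc;                 // start from the source texture's description
stagingDesc.Usage = ResourceUsage.Staging;                      // staging so the CPU can map it
stagingDesc.BindFlags = BindFlags.None;                         // staging resources cannot be bound to the pipeline
stagingDesc.CpuAccessFlags = CpuAccessFlags.Read;
stagingDesc.OptionFlags = ResourceOptionFlags.None;
SharpDX.Direct3D11.Texture2D staging = new SharpDX.Direct3D11.Texture2D(device, stagingDesc);

DeviceContext context = device.ImmediateContext;
context.CopyResource(emptyTexture, staging);                    // GPU-side copy of every subresource

DataStream stream;
DataBox box = context.MapSubresource(staging, 0, MapMode.Read, MapFlags.None, out stream);
for (int y = 0; y < 64; y++)
{
    stream.Position = y * box.RowPitch;                         // rows can be padded, so honour RowPitch
    for (int x = 0; x < 64; x++)
    {
        Half4 texel = stream.Read<Half4>();
        // ... inspect texel ...
    }
}
context.UnmapSubresource(staging, 0);

(One thing I noticed while writing this sketch: the DataStream I pre-allocate with new DataStream(...) gets replaced by the out parameter of MapSubresource anyway, so pre-allocating it shouldn't matter.)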

 


SampleCmpLevelZero samples return 0 only

05 April 2015 - 12:30 PM

Hello!

 

It's been a long day of trying to get shadow mapping implemented. It started off pretty well and I'm in a good spot, only this little texture sample is causing me some issues. I'm trying to use SampleCmpLevelZero to get some free PCF but the function returns 0, no matter what I put into it.

 

My shadow map looks good, and if I do a simple check then everything works out pretty well. For example, this piece of code produces some good output and shows that there is depth information that makes sense in both the local pixel depth and the shadow map.

float depthFromLightToThisPixel = pIn.ShadowPosH.z;
float depthFromLightToClosestPixel = gShadowMap.Sample(sam, pIn.ShadowPosH.xy).r;
float depthDiff = abs(depthFromLightToThisPixel - depthFromLightToClosestPixel);
return float4(0.9f, 0.9f, 0.9f, 1.0f) * depthDiff;

This code, however, just returns 0: fully black for all pixels, even if I force depthFromLightToThisPixel to be 0, 1, or any other value.

float depthFromLightToThisPixel = pIn.ShadowPosH.z;
float shadow = gShadowMap.SampleCmpLevelZero(ShadowSampler, pIn.ShadowPosH.xy, depthFromLightToThisPixel);
return float4(0.9f, 0.9f, 0.9f, 1.0f) * shadow;

I've come across a few possible issues and I think I've narrowed it down to either:

1. The sampler is incorrect.
2. The texture format is incorrect.

 

Here's the sampler (BorderColor is just for testing):

SamplerComparisonState ShadowSampler
{
    Filter = COMPARISON_MIN_MAG_LINEAR_MIP_POINT;
    AddressU = BORDER;
    AddressV = BORDER;
    AddressW = BORDER;
    BorderColor = float4(0.5f, 0.5f, 0.5f, 1.0f);
    ComparisonFunc = LESS_EQUAL;
};
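
If it turns out that the effect framework isn't actually applying this state, I understand the equivalent comparison sampler could be created and bound from the application side instead. Here's only a sketch of what I think that would look like in SharpDX, assuming device is the D3D11 device and that the compiled shader maps ShadowSampler to sampler slot 0; the comparisonSamplerDesc and shadowSampler names are placeholders.

SamplerStateDescription comparisonSamplerDesc = new SamplerStateDescription();
comparisonSamplerDesc.Filter = Filter.ComparisonMinMagLinearMipPoint;   // SampleCmp* needs a COMPARISON_* filter
comparisonSamplerDesc.AddressU = TextureAddressMode.Border;
comparisonSamplerDesc.AddressV = TextureAddressMode.Border;
comparisonSamplerDesc.AddressW = TextureAddressMode.Border;
comparisonSamplerDesc.BorderColor = new Color4(0.5f, 0.5f, 0.5f, 1.0f);
comparisonSamplerDesc.ComparisonFunction = Comparison.LessEqual;
comparisonSamplerDesc.MipLodBias = 0.0f;
comparisonSamplerDesc.MaximumAnisotropy = 1;
comparisonSamplerDesc.MinimumLod = 0.0f;
comparisonSamplerDesc.MaximumLod = float.MaxValue;
SamplerState shadowSampler = new SamplerState(device, comparisonSamplerDesc);

device.ImmediateContext.PixelShader.SetSampler(0, shadowSampler);       // slot 0 assumed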

And my textures are set up as follows:

Texture2D: R24G8_Typeless
DepthStencilView: D24_UNorm_S8_UInt
ShaderResourceView: R24_UNorm_X8_Typeless

 

Now, R24_UNorm_X8_Typeless should be OK to use with the comparison function according to the bottom of the page here (https://msdn.microsoft.com/en-us/library/windows/desktop/ff476132%28v=vs.85%29.aspx). When I run the shader through RenderDoc I get a texture format of R24G8_Typeless, but I assume that's OK and the ShaderResourceView is still doing the read in its own format. I've attached a file showing the texture format in RenderDoc.
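
For reference, this is roughly how I'd expect that texture and its two views to be created in SharpDX. It's just a sketch; the 2048x2048 size and the shadowMapDesc/shadowMap/dsvDesc/srvDesc names are illustrative rather than copied from my actual setup.

Texture2DDescription shadowMapDesc = new Texture2DDescription();
shadowMapDesc.Width = 2048;
shadowMapDesc.Height = 2048;
shadowMapDesc.MipLevels = 1;                                   // a single mip level, so SampleCmpLevelZero reads level 0
shadowMapDesc.ArraySize = 1;
shadowMapDesc.Format = SharpDX.DXGI.Format.R24G8_Typeless;     // typeless so the DSV and SRV can reinterpret it
shadowMapDesc.SampleDescription.Count = 1;
shadowMapDesc.SampleDescription.Quality = 0;
shadowMapDesc.Usage = ResourceUsage.Default;
shadowMapDesc.BindFlags = BindFlags.DepthStencil | BindFlags.ShaderResource;
shadowMapDesc.CpuAccessFlags = CpuAccessFlags.None;
shadowMapDesc.OptionFlags = ResourceOptionFlags.None;
SharpDX.Direct3D11.Texture2D shadowMap = new SharpDX.Direct3D11.Texture2D(device, shadowMapDesc);

// View used while rendering depth into the shadow map.
DepthStencilViewDescription dsvDesc = new DepthStencilViewDescription();
dsvDesc.Format = SharpDX.DXGI.Format.D24_UNorm_S8_UInt;
dsvDesc.Dimension = DepthStencilViewDimension.Texture2D;
dsvDesc.Texture2D.MipSlice = 0;
DepthStencilView shadowDsv = new DepthStencilView(device, shadowMap, dsvDesc);

// View used when sampling/comparing the shadow map in the pixel shader.
ShaderResourceViewDescription srvDesc = new ShaderResourceViewDescription();
srvDesc.Format = SharpDX.DXGI.Format.R24_UNorm_X8_Typeless;
srvDesc.Dimension = ShaderResourceViewDimension.Texture2D;
srvDesc.Texture2D.MipLevels = 1;
srvDesc.Texture2D.MostDetailedMip = 0;
ShaderResourceView shadowSrv = new ShaderResourceView(device, shadowMap, srvDesc);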

 

Ah, I had a thought before I left that it might be related to mips, since that's the level the compare happens on. I should be generating just a single mip level for the texture. However, if I use SampleLevel in the first block of code, for example:

depthFromLightToClosestPixel = gShadowMap.SampleLevel(sam, 0, pIn.ShadowPosH.xy).r;

Then I just get 1.0 returned. Solid white instead of solid black. Perhaps this is the root cause.

 

I'm pretty stumped at the moment as to what's causing this. Hopefully I haven't overlooked anything or left out any information. Thanks for reading! Time for me to eat!


ShaderResourceView sRGB format having no effect on sampler reads

15 March 2015 - 05:42 PM

Hey guys,

 

So I'm looking into gamma/linear correctness and all of that goodness but hit a bit of a hurdle tonight. I'm specifically using SharpDX at the moment but that shouldn't have much of an impact on what I'm trying to accomplish.

SharpDX.Direct3D11.ImageLoadInformation imageLoadInfo = new SharpDX.Direct3D11.ImageLoadInformation();
imageLoadInfo.BindFlags = SharpDX.Direct3D11.BindFlags.ShaderResource;
imageLoadInfo.Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm_SRgb;

SharpDX.Direct3D11.Resource sRGBTexture = SharpDX.Direct3D11.Texture2D.FromFile(device, filepath, imageLoadInfo);
textureView = new SharpDX.Direct3D11.ShaderResourceView(device, sRGBTexture);

The problem is that no matter whether I use R8G8B8A8_UNorm or R8G8B8A8_UNorm_SRgb, the sampler in the shader still reads the same values. If I manually adjust the values in the shader to account for gamma with pow(sampler.sample(), 2.2), then things are back on track again. Perhaps I'm misunderstanding things, but I thought that if I changed the format of the ResourceView then it should automatically apply (or not apply) gamma correction in the texture read.

 

For instance: https://msdn.microsoft.com/en-us/library/windows/desktop/hh972627(v=vs.85).aspx

 

Am I mistaken?
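
For completeness, here's a sketch of creating the view with an explicit format rather than letting it inherit the resource's format. This is only illustrative and assumes the underlying texture was created with a format that the sRGB view is allowed to reinterpret; the srvDesc name and the mip settings are placeholders.

ShaderResourceViewDescription srvDesc = new ShaderResourceViewDescription();
srvDesc.Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm_SRgb;   // sRGB view: the hardware converts to linear on read
srvDesc.Dimension = ShaderResourceViewDimension.Texture2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = -1;                           // -1 = all mip levels in the resource
textureView = new SharpDX.Direct3D11.ShaderResourceView(device, sRGBTexture, srvDesc);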


Placing particle emitters on dynamically clipped geometry

04 July 2013 - 04:35 PM

Hey Gamedev,

 

I'd like to create a reveal effect that slowly reveals the model in a scene. At the moment I have a very simple pixel shader that will reveal the model over time with a quick sin wave at the edge of the reveal for effect. The result is something like this...

[Image: cCRXgla.png]

Now, there is not a lot of impact with just this, for a number of reasons. One thing that I would like to add is particles at the edge of the reveal. I'm trying to think of a number of ways to execute this, but I could really do with some help since I'm sure that this is something that has been done a million times before. This is basically what I would be aiming for (please excuse the extremely enviable MS Paint skills):

[Image: BviSYQ3.png]

I'm developing this for use in Unity but I'm really new to it and don't really know exactly what's available to me. So if you have something that works specifically for Unity that is fine, but I'm really looking for just some general pointers on how to execute something like this.

Any advice, pointers, links, reference materials are all very much appreciated.
Thanks for taking the time to read my post.


How should I do environment reveal/texture blending?

06 July 2011 - 01:20 PM

At the moment I am trying to research possible ways to go from a solid black screen and then slowly reveal/hide the world to the player. This is not a short fade-in at the end of a load; it would take place over the length of a "round", which could be anything from 1 to 20 minutes long. It would also need to be quite stylistic, allowing for highly creative reveals. Imagine a model transitioning from a wireframe into a fully textured model, only it doesn't just transition from bottom to top; instead, text is slowly written or paint splatter marks are projected to reveal the texture of the model. I guess it's something along the lines of multi-texture blending similar to what is done with terrain, but I'm not positive, maybe masking? Anyway...

At the moment I have 2 or 3 different ideas that are floating around in my head.

1. UV map the environment and have a regular colour texture, but also a greyscale reveal texture (revealing the colour texture underneath); black areas would be visible immediately and whiter areas would slowly get revealed as the game progressed. This could allow for all sorts of creative reveals. The problems I currently see with it are that I don't know whether there would be discontinuity between different parts of the UV map, and that it means a full extra texture for each environment, though I think this is reasonable and the bit depth of the greyscale texture can be knocked down a little if need be.

2. If the camera is static, then the reveal could be done in screen space, but I think this would not give the most convincing effect overall. It would be good for quick screen transitions but not for a long reveal, and perspective would be quite hard to deal with. Although I suppose if the camera were static, I could take the first approach to generate the reveal texture and then take a 2D snapshot of the environment with the reveal texture to generate a screen-space reference, which should save on memory but restricts us to a static camera.

3. Do a geometry check for distance/intersection and then manipulate the edges of this using some reference image to match the concept style.

I'm avoiding a straight-up programmatic approach for now as an artistic approach would provide a much better feel.

I'm trying to find material and reference to better illustrate what I'm talking about but I'm having a hard time doing so.
Something like the following, but without the bouncing/animated effects, just the spreading: Youtube
Or picture Peter Parker being taken over by Venom, Predator going from visible to invisible, etc.

If you have any references, papers, articles, videos or know the official name of the technique I'm trying to describe that would be great. I hope that I've done a reasonable job describing what I'm trying to achieve so if you have any suggestions then I would be glad to hear them.
