DX11 Texture not filling the entire render target view on resizing



Hi 

I'm trying to implement resizing of the render target when the window/control is resized.

However, it is not working as expected (maybe because I'm not doing it correctly): the rendered texture is not filling my entire render target view.

Whenever the window is resized, I reset my render target view and any other render target (texture). Please see the code below:
 

this.ImgSource.SetRenderTargetDX11(null);

// Release the old render target, view and factory before recreating them.
Disposer.SafeDispose(ref this.m_RenderTargetView);
Disposer.SafeDispose(ref this.m_d11Factory);
Disposer.SafeDispose(ref this.RenderTarget);

int width = (int)sizeInfo.Width;
int height = (int)sizeInfo.Height;

// Recreate the render target texture at the new size.
Texture2DDescription colordesc = new Texture2DDescription
{
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
    Format = PIXEL_FORMAT,
    Width = width,
    Height = height,
    MipLevels = 1,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    OptionFlags = ResourceOptionFlags.Shared,
    CpuAccessFlags = CpuAccessFlags.None,
    ArraySize = 1
};

this.RenderTarget = new Texture2D(this.Device, colordesc);
m_RenderTargetView = new RenderTargetView(this.Device, this.RenderTarget);

// Recreate the depth/stencil buffer and its view at the same size.
m_depthStencil = CreateTexture2D(this.Device, width, height, BindFlags.DepthStencil, Format.D24_UNorm_S8_UInt);
m_depthStencilView = new DepthStencilView(this.Device, m_depthStencil);

// Point the viewport and output merger at the new targets.
Device.ImmediateContext.Rasterizer.SetViewport(0, 0, width, height, 0.0f, 1.0f);
Device.ImmediateContext.OutputMerger.SetTargets(m_depthStencilView, m_RenderTargetView);

SetShaderAndVertices(sizeInfo);

Also, my texture data is updated from another thread by mapping the bitmap data into my render target.

 

Note: the texture fills the entire render target view if the mapped image is the same size as my render target view.

Please see the screen dumps below:

1. When the mapped image and render target view are of the same dimensions:
[attachment=29725:Capture_SameSizeTexture.JPG]

 

 

2. When the mapped image and render target view are not of the same dimensions:
[attachment=29726:Capture_DifferentSizeTexture.JPG]

 

The above screen dumps highlight my issue.

How would I approach this so that, no matter the dimensions of the mapped image, my render target view is always filled with it?

Any suggestions ?

 

PS: Using C#, SharpDX with Direct3D 11 and D3DImage, and not using swap chains.

Thanks.

Edited by dave09cbank


Does it matter if this is stretched? I guess you are rendering this texture to a quad? Can you provide some numbers to go with those images, e.g. the width/height of the texture and render target in the first image, and the width/height of both in the second image?

 

Are portions chopped off in the first image (bottom and right side) or is that a normal image? Numbers will definitely help understand what's going on here and I suspect the solution will be nice and simple too.

 

Device.ImmediateContext.Rasterizer.SetViewport(0, 0, width, height, 0.0f, 1.0f);

 

The viewport sets what area actually gets rendered to; perhaps this is what is causing parts to not be rendered.
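For example (just a sketch with placeholder names, not your actual variables): if the viewport is still set from the old size after the target has been resized, anything outside that rectangle simply never gets rasterized.

// Sketch: the viewport rectangle limits where rasterization lands on the render target.
// After a resize it should match the new render target dimensions, otherwise part of
// the target stays untouched (or part of the quad lands outside the target).
context.Rasterizer.SetViewport(0, 0, renderTargetWidth, renderTargetHeight, 0.0f, 1.0f);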

 

It might also be useful to see this:

SetShaderAndVertices(sizeInfo);

Edited by Nanoha


Thanks for the reply, Nanoha.

 

As requested, I have provided all the information below.

SetShaderAndVertices method
 

 protected void SetShaderAndVertices(Size rendersize)
        {
            var device = this.Device;
            var context = device.ImmediateContext;

            ShaderBytecode shaderCode = GetShaderByteCode(eEffectType.Texture);
            layout = new InputLayout(device, shaderCode, new[] {
                   new InputElement("SV_Position", 0, Format.R32G32B32A32_Float, 0, 0),
                    new InputElement("TEXCOORD", 0, Format.R32G32_Float, 32, 0),
            });

            // Write vertex data to a datastream
            var stream = new DataStream(Utilities.SizeOf<VertexPositionTexture>() * 6, true, true);

            int iWidth = (int)rendersize.Width;
            int iHeight = (int)rendersize.Height;

            float top = iWidth / 2;
            float bottom = iHeight / 2;

            stream.WriteRange(new[]
                                 {
                            new VertexPositionTexture(
                                        new Vector4(-top, bottom, 0.5f, 1.0f), // position top-left
                                        new Vector2(0f,0f)
                                        ),
                            new VertexPositionTexture(
                                        new Vector4(top, bottom, 0.5f, 1.0f), // position top-right
                                        new Vector2(iWidth,iHeight)
                                        ),
                            new VertexPositionTexture(
                                        new Vector4(-top, -bottom, 0.5f, 1.0f), // position bottom-left
                                         new Vector2(iWidth,iHeight)
                                        ),
                            new VertexPositionTexture(
                                        new Vector4(-top, -bottom, 0.5f, 1.0f), // position bottom-right
                                        new Vector2(iWidth,0f)
                                        ),
                            new VertexPositionTexture(
                                        new Vector4(top, -bottom, 0.5f, 1.0f), // position bottom-right
                                         new Vector2(iWidth,iHeight)
                                        ),
                            new VertexPositionTexture(
                                        new Vector4(top, bottom, 0.5f, 1.0f), // position top-right
                                        new Vector2(0f, iHeight)
                                        ),
                                  });
            stream.Position = 0;

            // Instantiate VertexPositionTexture buffer from vertex data
            // 
            vertices = new SharpDX.Direct3D11.Buffer(device, stream, new BufferDescription()
            {
                BindFlags = BindFlags.VertexBuffer,
                CpuAccessFlags = CpuAccessFlags.None,
                OptionFlags = ResourceOptionFlags.None,
                SizeInBytes = Utilities.SizeOf<VertexPositionTexture>() * 6,
                Usage = ResourceUsage.Default,
                StructureByteStride = 0
            });
            stream.Dispose();

            // Prepare All the stages
            // for primitive topology https://msdn.microsoft.com/en-us/library/bb196414.aspx#ID4E2BAC
            context.InputAssembler.InputLayout = (layout);
            context.InputAssembler.PrimitiveTopology = (PrimitiveTopology.TriangleStrip);
            context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertices, Utilities.SizeOf<VertexPositionTexture>(), 0));

            context.OutputMerger.SetTargets(m_RenderTargetView);
        }

shader file: 

Texture2D ShaderTexture : register(t0);
SamplerState Sampler : register(s0);

cbuffer PerObject: register(b0)
{
	float4x4 WorldViewProj;
};


// ------------------------------------------------------
// A shader that accepts Position and Texture
// ------------------------------------------------------

struct VertexShaderInput
{
	float4 Position : SV_Position;
	float2 TextureUV : TEXCOORD0;
};

struct VertexShaderOutput
{
	float4 Position : SV_Position;
	float2 TextureUV : TEXCOORD0;
};

VertexShaderOutput VSMain(VertexShaderInput input)
{
	VertexShaderOutput output = (VertexShaderOutput)0;

	output.Position = input.Position;
	output.TextureUV = input.TextureUV;

	return output;
}

float4 PSMain(VertexShaderOutput input) : SV_Target
{
	return ShaderTexture.Sample(Sampler, input.TextureUV);
}

// ------------------------------------------------------
// A shader that accepts Position and Color
// ------------------------------------------------------

struct ColorVS_IN
{
	float4 pos : SV_Position;
	float4 col : COLOR;
};

struct ColorPS_IN
{
	float4 pos : SV_Position;
	float4 col : COLOR;
};

ColorPS_IN ColorVS(ColorVS_IN input)
{
	ColorPS_IN output = (ColorPS_IN)0;
	output.pos = input.pos;
	output.col = input.col;
	return output;
}

float4 ColorPS(ColorPS_IN input) : SV_Target
{
	return input.col;
}

// ------------------------------------------------------
// Techniques
// ------------------------------------------------------

technique11 Color
{
	pass P0
	{
		SetGeometryShader(0);
		SetVertexShader(CompileShader(vs_5_0, ColorVS()));
		SetPixelShader(CompileShader(ps_5_0, ColorPS()));
	}
}

technique11 TextureLayer
{
	pass P0
	{
		SetGeometryShader(0);
		SetVertexShader(CompileShader(vs_5_0, VSMain()));
		SetPixelShader(CompileShader(ps_5_0, PSMain()));
	}
}

It would depend on whether I wish to keep the aspect ratio or not.

Yes, there is data chopped off in the first image, as you noticed.

 

Sizes for the first image:
Display image size (835, 626) on a render target of size (720, 576)

 

Sizes for the second image:
Display image size (899, 674) on a render target of size (899, 676)

If you need any more information then do let me know and I will happily provide it.

Thanks.
 

Edited by dave09cbank


Is 'sizeInfo' the size of your texture or of the render target? It is being used to create the render target so I assume the latter, but if it is the texture size then that would explain something.

 

I am somewhat at a loss, but judging by the numbers you provided it looks like the render target dimensions are being used to create the view while the original texture size is being used to create everything else; certainly from the first image's numbers it looks like the right and bottom edges are just outside of the view. The second image's numbers dispute that theory a little though :/

Edited by Nanoha


I might not have named the variables correctly.

top and bottom should really be named quadrant width and height, as we use them when creating the Vector4 vertex positions.

The axes are drawn with the centre at 0,0, and the width and height give us our quadrant width (top variable) and quadrant height (bottom variable).

Sorry for the confusion it has caused.

(-w, +h)                  (+w, +h)
 --------------------------------
|                                |
|                                |
|............. 0,0 .............|
|                                |
|                                |
|                                |
 --------------------------------
(-w, -h)                  (+w, -h)

Hope this helps.



 

Edited by dave09cbank



How would I approach this so that, no matter the dimensions of the mapped image, my render target view is always filled with it?

Any suggestions?

If you draw a quad textured with the image straight in projection space (-1/1 x, -1/1 y), it will stretch to whatever the screen is, of course breaking the ratio. Since the texture coordinates of the vertices of a screen-aligned quad do not need to change when the texture dimensions change, always keep them at zero/one, and reposition the quad vertices in projection space to see the image in its real ratio, zoomed, etc.

 

Projection space maps onto the screen with x positive to the right and y positive up, (0,0) at the center of the screen and (-1.0, 1.0) being the top-left corner; the z axis also has a fixed normalized range.

 

Read the screen width/height and the image width/height and use those for pixel-precise positioning, translating into projection space.
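As a rough sketch of that idea (my names here — imageWidth/imageHeight, targetWidth/targetHeight and the 4-vertex strip — are only placeholders, built on the VertexPositionTexture layout you already use): keep the texture coordinates at 0/1 and only shrink the quad's projection-space extents on the axis where the image is narrower than the target.

// Sketch: letterbox the image by repositioning the quad in projection space.
// Texture coordinates stay fixed at 0..1; only the vertex positions change.
float targetAspect = (float)targetWidth / targetHeight;
float imageAspect  = (float)imageWidth  / imageHeight;

float quadW = 1.0f, quadH = 1.0f;            // half-extents in projection space (full screen = 1)
if (imageAspect > targetAspect)
    quadH = targetAspect / imageAspect;      // wider image -> bars on top/bottom
else
    quadW = imageAspect / targetAspect;      // taller image -> bars on left/right

var quad = new[]
{
    new VertexPositionTexture(new Vector4(-quadW,  quadH, 0.5f, 1.0f), new Vector2(0f, 0f)), // top-left
    new VertexPositionTexture(new Vector4( quadW,  quadH, 0.5f, 1.0f), new Vector2(1f, 0f)), // top-right
    new VertexPositionTexture(new Vector4(-quadW, -quadH, 0.5f, 1.0f), new Vector2(0f, 1f)), // bottom-left
    new VertexPositionTexture(new Vector4( quadW, -quadH, 0.5f, 1.0f), new Vector2(1f, 1f)), // bottom-right
};
// With PrimitiveTopology.TriangleStrip this order (TL, TR, BL, BR) covers the whole quad.

The viewport stays at the full render target size; the bars are simply the area of the target that the quad does not cover.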


Thanks for the information @JohnnyCode. This has been really useful and I have managed to get the resizing working (although not maintaining the aspect ratio).

Also, when you mention:

 

 

Since the texture coordinates of the vertices of a screen-aligned quad do not need to change when the texture dimensions change, always keep them at zero/one, and reposition the quad vertices in projection space to see the image in its real ratio, zoomed, etc.

does that mean I need to re-position the quad vertices if I wish to implement a letter-boxing technique when resizing but maintain the aspect ratio?

I did try to set the viewport, but it doesn't draw the texture properly; rather it looks as if we are zoomed into the texture, as if the texture is stretched over a smaller region of it, as shown below:

[attachment=29759:Capture_StetchedTexture.JPG]



Any ideas as to why this would happen ?



I did try to set the viewport, but it doesn't draw the texture properly; rather it looks as if we are zoomed into the texture, as if the texture is stretched over a smaller region of it, as shown below:

What you see in the picture is the result of incorrectly assigned texture coordinates on the respective corners (you have the same texture coordinate value on three of them, or something like that).
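In your SetShaderAndVertices above, three of the vertices get new Vector2(iWidth, iHeight) and the values are in pixels, so the sampler reads far outside the 0..1 range. A sketch of one distinct, normalized coordinate per corner (following the Direct3D convention where (0,0) is the top-left of the texture):

// Sketch: normalized texture coordinates, one distinct value per quad corner.
var corners = new[]
{
    new Vector2(0f, 0f), // top-left
    new Vector2(1f, 0f), // top-right
    new Vector2(0f, 1f), // bottom-left
    new Vector2(1f, 1f), // bottom-right
};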


OK, so whenever I change the viewport dimensions and location... do I need to update the texture coordinates as well?
 

Also, I have added my sample test program which I have been using... just for reference.

 

Download here

Edited by dave09cbank



OK, so whenever I change the viewport dimensions and location... do I need to update the texture coordinates as well?

No you don't; just map the complete texture onto the quad. This is what texture space looks like:

[Image: 20100531_DX_OpenGL.png — texture coordinate space in Direct3D vs. OpenGL]

