
jdub

Member Since 01 Aug 2007
Offline Last Active Mar 24 2015 06:45 PM

Topics I've Started

NVIDIA NSight vs Visual Studio 2013 Graphics Debugger?

04 March 2015 - 06:59 PM

I am wondering if there are any good resources that compare Visual Studio 2013's Graphics Debugger with NVIDIA NSight with regard to their ability to debug compute shaders.

As my compute-shader ray tracer grows more complex, I've noticed that Visual Studio takes an incredibly long time to "generate shader traces". I'm wondering whether this is simply the cost of simulating a compute shader on the CPU for debugging purposes, or whether it represents an actual bug in Visual Studio's graphics debugger. I'm hoping that NSight might offer an improvement in speed.

Moving Data from CPU to a Structured Buffer

03 March 2015 - 07:48 PM

I am building a ray tracer.  I have a structured buffer whose elements hold information about the geometry/materials of my scene.  I want to supply this geometry from the CPU to my compute shader (not through a constant buffer, because there is too much geometry data).

The approach that immediately comes to mind is to create the structured buffer as a dynamic buffer and write data to it with Map()/Unmap().  However, dynamic resources apparently cannot be bound directly to the pipeline as shader resources.

What is a good way to do this?
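In case it helps frame the question, here is a sketch of the alternative I am considering: a DEFAULT-usage structured buffer filled from the CPU with UpdateSubresource() instead of Map()/Unmap(). The SceneElement struct and the device / context / scene_elements names are placeholders, not my actual code:

```cpp
// Sketch only: a DEFAULT-usage structured buffer updated from the CPU.
// `device`, `context`, `scene_elements`, and `num_elements` are placeholders.
struct SceneElement
{
    float position[3];
    float color[3];
};

D3D11_BUFFER_DESC desc = {};
desc.Usage               = D3D11_USAGE_DEFAULT;          // GPU-readable; CPU updates via UpdateSubresource()
desc.BindFlags           = D3D11_BIND_SHADER_RESOURCE;   // bound to the compute shader through an SRV
desc.ByteWidth           = sizeof(SceneElement) * num_elements;
desc.StructureByteStride = sizeof(SceneElement);
desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = scene_elements;   // initial geometry data on the CPU

ID3D11Buffer *buffer = nullptr;
device->CreateBuffer(&desc, &init, &buffer);

// Later, when the CPU-side geometry changes:
context->UpdateSubresource(buffer, 0, nullptr, scene_elements, 0, 0);
```

I am not sure whether this beats writing into a DYNAMIC buffer and copying it into a DEFAULT one with CopyResource() for per-frame updates, which is part of what I am asking.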


Writing my own Depth Buffer Functionality

20 February 2015 - 03:15 PM

I am taking a graphics class at my university.  For the latest assignment, we are required to implement the functionality of a depth buffer for rendering our geometry.  The assignment is supposed to be done in OpenGL using a separate set of skeleton code, but because I am more familiar with DirectX I would like to implement it in that API instead.  With that in mind, is there a way to replace DirectX's standard depth-stencil functionality with my own code (which would do the same thing) so that I can meet the requirements of the assignment?
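To check my understanding of what the assignment actually requires, here is a minimal CPU-side sketch of the behavior I would be replicating, in plain C++ and independent of either API (the names are mine, not from the skeleton code):

```cpp
#include <cassert>
#include <vector>

// A software depth buffer: one float per pixel, initialized to the far
// plane (1.0f), with a LESS comparison before every write; this mirrors
// what D3D11's default depth-stencil state does in hardware.
struct DepthBuffer
{
    int width, height;
    std::vector<float> depth;

    DepthBuffer(int w, int h) : width(w), height(h), depth(w * h, 1.0f) {}

    // Returns true (and stores z) only if the fragment is nearer than
    // what is already in the buffer; the fragment's color should be
    // written only in that case.
    bool test_and_write(int x, int y, float z)
    {
        float &stored = depth[y * width + x];
        if (z < stored)   // equivalent of D3D11_COMPARISON_LESS
        {
            stored = z;
            return true;
        }
        return false;
    }
};
```

In DirectX terms I imagine this means disabling the real depth-stencil state and doing the comparison myself in a shader, which is the part I am unsure how to structure.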

 

 


What is wrong with my compute shader?

07 February 2015 - 03:48 PM

Hello!  I am trying to write a compute shader which rasterizes a single 2D triangle.  Here is the code for the shader:

cbuffer rasterizer_params : register(b0)
{
    float3 default_color, tri_color;
    int num_tris;
    uint output_width, output_height;
    float3 padding;
}

StructuredBuffer<int2> input_vertices : register(b0);
RWTexture2D<float4> output_texture : register(u0);

float3 barycentric(int2 pos, int2 a, int2 b, int2 c)
{
    float3 res; 

    float2 v0 =  pos - a;
    float2 v1 = b - a;
    float2 v2 = c - a;
    
    float d20 = dot(v2, v0);
    float d12 = dot(v1, v2);
    float d22 = dot(v2, v2);
    float d10 = dot(v1, v0);
    float d11 = dot(v1, v1);
    float d21 = dot(v2, v1);

    float denom = d22*d11 - d21*d12;

    res.y = (d10*d22 - d20*d21) / denom;
    res.z = (d20*d11 - d10*d12) / denom;
    res.x = 1.0f - (res.y + res.z);
    return res;
}

float3 rasterize(int2 pos, int2 vert0, int2 vert1, int2 vert2)
{
    float3 res = barycentric(pos, vert0, vert1, vert2);
    
    if(res.x >= 0.0f && res.y >= 0.0f && res.z >= 0.0f)
        return tri_color;
    else
        return default_color;
}

[numthreads(32, 32, 1)]
void CSMain(uint2 dispatch_tid : SV_DispatchThreadID)
{
    float3 pix_color;

    pix_color = rasterize(
        int2(dispatch_tid.x, dispatch_tid.y),
        int2(0, 0),
        int2(25, 0),
        int2(0, 25));

    output_texture[dispatch_tid.xy] = float4(pix_color.x, pix_color.y, pix_color.z, 1.0f);
}

The output is a completely black texture (meaning that none of the pixels are passing the rasterization test).  I've tried stepping through my code in the graphics debugger, but I've noticed that I can't read the values of a lot of variables (or they appear as NaN).  I assume this is due to the way the shader is compiled, but it makes the debugger almost useless if I can't examine the values of certain variables over the execution of my program.  What gives?
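Since the debugger isn't telling me much, one sanity check I can do is port barycentric() to the CPU (plain C++, same variable names as the HLSL, with float2 in place of int2 for the arguments) and compare it against hand-computed values for the (0,0)/(25,0)/(0,25) triangle:

```cpp
#include <cassert>
#include <cmath>

struct float2 { float x, y; };
struct float3 { float x, y, z; };

static float dot2(float2 a, float2 b) { return a.x * b.x + a.y * b.y; }

// Line-for-line CPU port of the shader's barycentric() so the math can
// be checked outside the graphics debugger.
float3 barycentric(float2 pos, float2 a, float2 b, float2 c)
{
    float2 v0 = { pos.x - a.x, pos.y - a.y };
    float2 v1 = { b.x - a.x,   b.y - a.y };
    float2 v2 = { c.x - a.x,   c.y - a.y };

    float d20 = dot2(v2, v0);
    float d12 = dot2(v1, v2);
    float d22 = dot2(v2, v2);
    float d10 = dot2(v1, v0);
    float d11 = dot2(v1, v1);
    float d21 = dot2(v2, v1);

    float denom = d22 * d11 - d21 * d12;

    float3 res;
    res.y = (d10 * d22 - d20 * d21) / denom;
    res.z = (d20 * d11 - d10 * d12) / denom;
    res.x = 1.0f - (res.y + res.z);
    return res;
}
```

For the point (5, 5) this gives (0.6, 0.2, 0.2), all non-negative, so the math itself passes the inside test on the CPU; that makes me suspect the problem is elsewhere (the constant buffer values, perhaps).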


Compute Shader won't Fill Texture

26 January 2015 - 09:35 PM

I'm trying to set up a simple compute shader program that fills a given texture with pixel data of a certain color.  Here is my CPU-side code:

bool Init(void)
{
	HRESULT res;
	D3D11_TEXTURE2D_DESC texture_desc;
	D3D11_SHADER_RESOURCE_VIEW_DESC texture_SRV_desc;
	D3D11_UNORDERED_ACCESS_VIEW_DESC texture_UAV_desc;


	texture_desc.ArraySize = 1;
	texture_desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
	texture_desc.CPUAccessFlags = 0;
	texture_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
	texture_desc.Width = this->image_width;
	texture_desc.Height = this->image_height;
	texture_desc.MipLevels = 1;
	texture_desc.MiscFlags = 0;
	texture_desc.SampleDesc.Count = 1; 
	texture_desc.SampleDesc.Quality = 0;
	texture_desc.Usage = D3D11_USAGE_DEFAULT;

	texture_SRV_desc.Texture2D.MipLevels = 1;
	texture_SRV_desc.Texture2D.MostDetailedMip = 0;
	texture_SRV_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
	texture_SRV_desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;

	texture_UAV_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
	texture_UAV_desc.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
	texture_UAV_desc.Texture2D.MipSlice = 0;
	
	if (FAILED(res = this->renderer->GetDevice()->CreateTexture2D(&texture_desc, NULL, &this->texture)))
	{
		return false;
	}
	
	if (FAILED(res = this->renderer->GetDevice()->CreateUnorderedAccessView(this->texture, &texture_UAV_desc, &this->texture_uav)))
	{
		return false;
	}

	if (FAILED(res = this->renderer->GetDevice()->CreateShaderResourceView(this->texture, &texture_SRV_desc, &this->texture_SRV)))
	{
		return false;
	}

	if (!this->create_compute_shader())
		return false;

	
	this->invoke_compute_shader();

	return true;
}

void invoke_compute_shader(void)
{
	ID3D11ShaderResourceView *nullSRV = { NULL };
	ID3D11UnorderedAccessView *nullUAV = { NULL };
	ID3D11ComputeShader *nullCShader = { NULL };

	this->renderer->GetDeviceContext()->CSSetShader(this->shader, NULL, 0);

	this->renderer->GetDeviceContext()->CSSetUnorderedAccessViews(1, 1, &this->texture_uav, NULL);

	this->renderer->GetDeviceContext()->Dispatch(32, 32, 1);

	this->renderer->GetDeviceContext()->CSSetShaderResources(0, 1, &nullSRV);
	this->renderer->GetDeviceContext()->CSSetUnorderedAccessViews(0, 1, &nullUAV, 0);
	this->renderer->GetDeviceContext()->CSSetShader(nullCShader, 0, 0);
}


Here is the compute shader code itself:

RWTexture2D<float4> output_texture : register(u0);

[numthreads(32, 32, 1)]
void CSMain( uint3 dispatch_tid : SV_DispatchThreadID )
{
	uint2 index = uint2(dispatch_tid.x, dispatch_tid.y);
	output_texture[index] = float4(1.0f, 1.0f, 0.0f, 1.0f);
}

I've taken a look at the texture in Visual Studio's Resource Visualizer and it shows an empty texture.  What am I doing wrong?

