jdub

Members
  • Content count

    359
Community Reputation

459 Neutral

About jdub

  • Rank
    Member
  1. The reason I ask is because I have gotten to the point where Visual Studio won't load my compute shader when trying to debug it.  It doesn't show any error.  It just hangs on "generating shader traces".  So I'm wondering if there's a debugger that would be better for complex compute shaders?
  2. I am wondering if there are any good resources which provide a comparison between Visual Studio 2013's Graphics Debugger and NVIDIA NSight with regards to their ability to debug compute shaders?   As my compute ray-tracer is getting more complex, I am noticing that it is taking Visual Studio incredibly long to "generate shader traces".  I'm wondering if this is just the cost of trying to simulate a Compute Shader on the CPU for debugging purposes or represents an actual bug with Visual Studio's Graphics debugger.  I'm hoping that NSight might be able to offer an improvement in speed.    
  3. Hmm... It appears I misread the error message.  The result is still a little cryptic to me.  Here is the code I'm calling:

      HRESULT CreateStructuredBuffer(ID3D11Device *device, UINT element_size, UINT count, void *initial_data, ID3D11Buffer **out)
      {
          *out = NULL;

          D3D11_BUFFER_DESC desc;
          ZeroMemory(&desc, sizeof(D3D11_BUFFER_DESC));
          desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
          desc.ByteWidth = element_size * count;
          desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
          desc.StructureByteStride = element_size;
          desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
          desc.Usage = D3D11_USAGE_DYNAMIC;

          if (initial_data)
          {
              D3D11_SUBRESOURCE_DATA subresource_data;
              subresource_data.pSysMem = initial_data;  // initial_data already points at the element data
              return device->CreateBuffer(&desc, &subresource_data, out);
          }
          else
          {
              return device->CreateBuffer(&desc, NULL, out);
          }
      }

  And here is the error message I get when I try to call CreateBuffer:

      D3D11 ERROR: ID3D11Device::CreateBuffer: A D3D11_USAGE_DYNAMIC Resource cannot be bound to certain parts of the graphics pipeline, but must have at least one BindFlags bit set. The BindFlags bits (0x88) have the following settings: D3D11_BIND_STREAM_OUTPUT (0), ...
  4. I am building a ray tracer.  I have a structured buffer whose elements hold information about the geometry/materials of my scene.  I want to be able to supply this geometry from the CPU to my compute shader (not through a constant buffer, because there is too much geometry data).  The way that immediately comes to mind is to create my structured buffer as a dynamic buffer and use Map()/Unmap() to write data to it.  However, apparently dynamic resources cannot be directly bound to the pipeline as shader resources.  What is a good way to do this?  (A buffer-creation sketch for this question appears after these posts.)
  5. I am taking a graphics class at my university.  For the latest assignment, we are required to implement the functionality of a depth buffer for rendering our geometry.  The assignment is supposed to be implemented in OpenGL using a separate set of skeleton code, but because I am more familiar with DirectX I would like to implement it in that API.  That being said, is there a way to replace the standard depth-stencil functionality of DirectX with my own code (which will do the same thing) in order to complete the requirements of the assignment?  (A small depth-state sketch related to this appears after these posts.)
  6. Hello!  I am trying to write a compute shader which rasterizes a single 2D triangle.  Here is the code for the shader:

      cbuffer rasterizer_params : register(b0)
      {
          float3 default_color, tri_color;
          int num_tris;
          uint output_width, output_height;
          float3 padding;
      }

      StructuredBuffer<int2> input_vertices : register(b0);
      RWTexture2D<float4> output_texture : register(u0);

      float3 barycentric(int2 pos, int2 a, int2 b, int2 c)
      {
          float3 res;
          float2 v0 = pos - a;
          float2 v1 = b - a;
          float2 v2 = c - a;

          float d20 = dot(v2, v0);
          float d12 = dot(v1, v2);
          float d22 = dot(v2, v2);
          float d10 = dot(v1, v0);
          float d11 = dot(v1, v1);
          float d21 = dot(v2, v1);

          float denom = d22*d11 - d21*d12;
          res.y = (d10*d22 - d20*d21) / denom;
          res.z = (d20*d11 - d10*d12) / denom;
          res.x = 1.0f - (res.y + res.z);
          return res;
      }

      float3 rasterize(int2 pos, int2 vert0, int2 vert1, int2 vert2)
      {
          float3 res = barycentric(pos, vert0, vert1, vert2);
          if (res.x >= 0.0f && res.y >= 0.0f && res.z >= 0.0f)
              return tri_color;
          else
              return default_color;
      }

      [numthreads(32, 32, 1)]
      void CSMain(uint2 dispatch_tid : SV_DispatchThreadID)
      {
          float3 pix_color;
          pix_color = rasterize(
              int2(dispatch_tid.x, dispatch_tid.y),
              int2(0, 0), int2(25, 0), int2(0, 25));
          output_texture[dispatch_tid.xy] = float4(pix_color.x, pix_color.y, pix_color.z, 1.0f);
      }

  The output is a completely black texture (meaning that none of the pixels are passing the rasterization test).  I've tried stepping through my code in the graphics debugger.  However, I've noticed that I can't read the values of a lot of variables in the code (or they appear as NaN).  I assume that this is due to the way the shader is compiled, but it makes the debugger almost useless if I can't examine the values of certain variables over the execution of my program.  What gives?  (See the note on debug compile flags after these posts.)
  7. This is for a school project where the goal is to rasterize triangles.  Thus, the contents of the texture "back buffer" can change, so I would rather Map/Unmap it than create a new texture every time I need to draw.  It turns out that the Map/Unmap is indeed the culprit.  I was creating the texture at a size whose rows aren't memory-aligned (1000x1000), so DirectX pads each row of the texture to align it.  I wasn't accounting for that extra padding while writing texture data.  (A row-pitch copy sketch appears after these posts.)
  8. Thanks for the response.  Perhaps I didn't clarify, but the application renders incorrectly while running in debug mode.  However, when a frame is captured inside the graphics debugger, the frame renders as it should.
  9. I'm trying to set up a simple compute shader program which simply fills a given texture with pixel data of a certain color.  Here is my CPU-side code:

      bool Init(void)
      {
          HRESULT res;
          D3D11_TEXTURE2D_DESC texture_desc;
          D3D11_SHADER_RESOURCE_VIEW_DESC texture_SRV_desc;
          D3D11_UNORDERED_ACCESS_VIEW_DESC texture_UAV_desc;

          texture_desc.ArraySize = 1;
          texture_desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
          texture_desc.CPUAccessFlags = 0;
          texture_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
          texture_desc.Width = this->image_width;
          texture_desc.Height = this->image_height;
          texture_desc.MipLevels = 1;
          texture_desc.MiscFlags = 0;
          texture_desc.SampleDesc.Count = 1;
          texture_desc.SampleDesc.Quality = 0;
          texture_desc.Usage = D3D11_USAGE_DEFAULT;

          texture_SRV_desc.Texture2D.MipLevels = 1;
          texture_SRV_desc.Texture2D.MostDetailedMip = 0;
          texture_SRV_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
          texture_SRV_desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;

          texture_UAV_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
          texture_UAV_desc.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE2D;
          texture_UAV_desc.Texture2D.MipSlice = 0;

          if (FAILED(res = this->renderer->GetDevice()->CreateTexture2D(&texture_desc, NULL, &this->texture)))
          {
              return false;
          }
          if (FAILED(res = this->renderer->GetDevice()->CreateUnorderedAccessView(this->texture, &texture_UAV_desc, &this->texture_uav)))
          {
              return false;
          }
          if (FAILED(res = this->renderer->GetDevice()->CreateShaderResourceView(this->texture, &texture_SRV_desc, &this->texture_SRV)))
          {
              return false;
          }

          if (!this->create_compute_shader())
              return false;

          this->invoke_compute_shader();
          return true;
      }

      void invoke_compute_shader(void)
      {
          ID3D11ShaderResourceView *nullSRV = { NULL };
          ID3D11UnorderedAccessView *nullUAV = { NULL };
          ID3D11ComputeShader *nullCShader = { NULL };

          this->renderer->GetDeviceContext()->CSSetShader(this->shader, NULL, 0);
          this->renderer->GetDeviceContext()->CSSetUnorderedAccessViews(1, 1, &this->texture_uav, NULL);
          this->renderer->GetDeviceContext()->Dispatch(32, 32, 1);

          this->renderer->GetDeviceContext()->CSSetShaderResources(0, 1, &nullSRV);
          this->renderer->GetDeviceContext()->CSSetUnorderedAccessViews(0, 1, &nullUAV, 0);
          this->renderer->GetDeviceContext()->CSSetShader(nullCShader, 0, 0);
      }

  Here is the compute shader code itself:

      RWTexture2D<float4> output_texture : register(u0);

      [numthreads(32, 32, 1)]
      void CSMain(uint3 dispatch_tid : SV_DispatchThreadID)
      {
          uint2 index = uint2(dispatch_tid.x, dispatch_tid.y);
          output_texture[index] = float4(1.0f, 1.0f, 0.0f, 1.0f);
      }

  I've taken a look at the texture in Visual Studio's Resource Visualizer and it shows an empty texture.  What am I doing wrong?  (See the note on UAV slot binding after these posts.)
  10. I am building an application that rasterizes (on the CPU) and renders 2D triangles.  In order to do this, I create a texture to hold the 2D triangles and then render it on-screen on a textured quad.  The issue is that the triangle renders incorrectly while the application is running.  However, when I capture a frame inside VS2013's graphics debugger, the resultant frame is rendered with the triangle appearing as I would expect it to be.  Here is the code for how I create the texture holding the triangle:

      struct Pixel
      {
          char r, g, b, a;
      };

      ....

      HRESULT res;
      D3D11_TEXTURE2D_DESC texture_desc;
      D3D11_SUBRESOURCE_DATA initial_data;
      D3D11_SHADER_RESOURCE_VIEW_DESC SRV_desc;
      Pixel default_color;

      this->raw_data = (char *)malloc(this->image_width*this->image_height*sizeof(Pixel));

      default_color.r = 0;
      default_color.g = 0;
      default_color.b = 0;
      default_color.a = 255;

      texture_desc.ArraySize = 1;
      texture_desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
      texture_desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
      texture_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
      texture_desc.Width = this->image_width;
      texture_desc.Height = this->image_height;
      texture_desc.MipLevels = 1;
      texture_desc.MiscFlags = 0;
      texture_desc.SampleDesc.Count = 1;
      texture_desc.SampleDesc.Quality = 0;
      texture_desc.Usage = D3D11_USAGE_DYNAMIC;

      initial_data.pSysMem = this->raw_data;
      initial_data.SysMemPitch = this->image_width * sizeof(Pixel);
      initial_data.SysMemSlicePitch = 0;

      SRV_desc.Texture2D.MipLevels = 1;
      SRV_desc.Texture2D.MostDetailedMip = 0;
      SRV_desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
      SRV_desc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;

      for (int i = 0; i < this->image_height; i++)
      {
          for (int j = 0; j < this->image_width; j++)
          {
              memcpy(&this->raw_data[i*this->image_width*sizeof(Pixel) + j*sizeof(Pixel)], &default_color, sizeof(Pixel));
          }
      }

      if (FAILED(res = this->renderer->GetDevice()->CreateTexture2D(&texture_desc, &initial_data, &this->texture)))
      {
          return false;
      }
      if (FAILED(res = this->renderer->GetDevice()->CreateShaderResourceView(this->texture, &SRV_desc, &this->texture_SRV)))
      {
          return false;
      }

  And here is where the quad is rendered:

      D3D11_MAPPED_SUBRESOURCE mapped_subresource;
      ID3D11Buffer *vBuffs = { this->quad_v_buffer };
      UINT strides[] = { sizeof(TexturedVertex) };
      UINT offsets[] = { 0 };
      ID3D11SamplerState *sampler_states = { this->renderer->GetSamplerState() };
      ID3D11ShaderResourceView *textureSRVs = { this->texture_SRV };

      HRESULT res = this->renderer->GetDeviceContext()->Map(this->texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped_subresource);
      memcpy(mapped_subresource.pData, this->raw_data, this->image_width*this->image_height*sizeof(Pixel));
      this->renderer->GetDeviceContext()->Unmap(this->texture, 0);

      this->renderer->BindShader(SHADER_TYPE_TEXTURE);
      this->renderer->SetTransform(TRANSFORM_WORLD, Matrix::Identity());
      this->renderer->SetTransform(TRANSFORM_VIEW, Matrix::Identity());
      this->renderer->SetTransform(TRANSFORM_PROJECTION, Matrix::Identity());
      this->renderer->GetDeviceContext()->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
      this->renderer->GetDeviceContext()->IASetVertexBuffers(0, 1, &vBuffs, strides, offsets);
      this->renderer->GetDeviceContext()->PSSetShaderResources(0, 1, &textureSRVs);
      this->renderer->SetCullMode(D3D11_CULL_NONE);
      this->renderer->GetDeviceContext()->Draw(6, 0);

  I should also add that the previous code works absolutely fine (renders a textured quad) when I provide a texture that I load from disk.
Attached are images of how the quad renders 2 triangles correctly (in the graphics debugger) and incorrectly (on the application window). [attachment=25657:triangle_error.jpg] [attachment=25656:triangle_correct.jpg]
  11. I am noticing that a lot of errors are cropping up in my DirectX application because I set values (depth/stencil state, raster state, etc.) when doing rendering operations and then forget to unset them.  In D3D9, it appears that issues like this could be managed using state blocks.  I'm wondering: what is the best way to implement the same functionality in D3D11?  Are there any existing implementations in use that people find to work well?  (A small state save/restore sketch appears after these posts.)
  12. My control scheme is as follows (with respect to a LH coordinate system):

  W, A, S, D: translate the camera along its standard axes (local to camera space).
  Q, E: rotate the camera left and right, respectively.
  Mouse movement: behaves in a manner similar to that of most FPS games.

  The system seems to work fine at trivial angles.  However, I have noticed that if the camera is rotated, my translations (A, D) do not translate in the correct direction.  In addition, with enough use of the FPS-like mouse control system, rotations begin to break down.  (A quaternion-orientation sketch related to this appears after these posts.)  Here is my code:

  Camera class:

      void Camera3D::Translate(Vector3 v)
      {
          this->translation += v;
      }

      void Camera3D::SetRotationAxisAngle(Vector3 v, double angle)
      {
          Matrix new_rot = Matrix::CreateFromAxisAngle(v, angle);
          this->rotation = new_rot;
      }

      void Camera3D::RotateAxisAngle(Vector3 v, double angle)
      {
          Matrix new_rot = Matrix::CreateFromAxisAngle(v, angle);
          this->rotation *= new_rot;
      }

      void Camera3D::buildView(void)
      {
          this->view = Matrix::Identity();
          this->view.m[3][0] = -this->translation.x;
          this->view.m[3][1] = -this->translation.y;
          this->view.m[3][2] = -this->translation.z;
          this->view *= this->rotation;
          this->invView = this->view.Invert();
      }

  Input control code:

      void InputController::Update(double ms)
      {
          float dist = (ms / 1000.0)*this->cameraMoveSpeed;
          Matrix view = this->renderer->GetCamera().GetView();
          Vector3 axis;

          switch (this->motionState)
          {
          case MOTION_STATE_FORWARD:
              this->renderer->GetCamera().Translate(view.Forward() * dist);
              break;
          case MOTION_STATE_BACK:
              this->renderer->GetCamera().Translate(view.Backward() * dist);
              break;
          case MOTION_STATE_LEFT:
              this->renderer->GetCamera().Translate(view.Left() * dist);
              break;
          case MOTION_STATE_RIGHT:
              axis = view.Right();
              this->renderer->GetCamera().Translate(view.Right() * dist);
              break;
          case MOTION_STATE_ROT_RIGHT:
              this->renderer->GetCamera().RotateAxisAngle(view.Forward(), 0.01f);
              break;
          case MOTION_STATE_ROT_LEFT:
              this->renderer->GetCamera().RotateAxisAngle(view.Forward(), -0.01f);
              break;
          }
      }

      void InputController::FPSMouseUpdate(int x, int y)
      {
          if (x == 0 && y == 0)
          {
              return;
          }
          this->mouseDeltaX = x;
          this->mouseDeltaY = y;

          Vector3 mouseDelta(-this->mouseDeltaX, this->mouseDeltaY, 0.0f);
          Matrix view = this->renderer->GetCamera().GetView();
          this->renderer->GetCamera().RotateAxisAngle(view.Right(), mouseDamping*this->mouseDeltaY);
          this->renderer->GetCamera().RotateAxisAngle(view.Up(), mouseDamping*this->mouseDeltaX);
      }
  13. I apologize if the title is somewhat misleading, but I was unsure what to title this post.  I am trying to develop somewhat of a flight-simulator-type camera system.  Ideally, no matter what the orientation of the camera is relative to the world, if the user presses the "travel right" key, the camera should appear to travel right from their point of view (regardless of its orientation in world space).  I have written some code that attempts to perform an inverse transformation of directions relative to the camera back into world space before rotating the camera.  I should add that I am using the DirectXTK library for its math functions.  Here is my code:

      user_input_function(INPUT input_type)
      {
          switch (input_type)
          {
          ....
          case MOTION_STATE_ROT_RIGHT:
              axis = __transform_local(Vector3(0, 0, 1));
              new_transform = cur_transform * Matrix::CreateFromAxisAngle(axis, -0.1f);
              this->renderer->SetTransform(TRANSFORM_VIEW, new_transform);
              break;
          }
      }

      Vector3 __transform_local(Vector3 &v)
      {
          Matrix inv_view = this->renderer->GetInvTransform(TRANSFORM_VIEW);
          return Vector3::TransformNormal(v, inv_view);
      }

  When I run my program, the rotations produced are incorrect when attempting to rotate around an axis that is not strictly X, Y, or Z (relative to the camera).
  14. That command didn't work.  I'm in the Ubuntu terminal.
  15. Hi guys.  I downloaded glfw-3.0.4 from the glfw site and I am trying to follow their instructions to build the project from source.  However, when I type 'cmake .' in the glfw-3.0.4 directory, I get the following errors:   CMake Error: Cannot determine link language for target "glfw". CMake Error: CMake can not determine linker language for target:glfw   Now it's very obvious what's causing these errors but I am wondering why they are cropping up if I am following the build directions that are posted on the GLFW site?
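
Regarding the structured-buffer question in posts 3 and 4: a D3D11_USAGE_DYNAMIC resource cannot also carry D3D11_BIND_UNORDERED_ACCESS, which is what the quoted CreateBuffer error is complaining about.  Below is a minimal sketch of one common alternative, assuming the compute shader only needs to read the geometry through an SRV: keep the buffer at DEFAULT usage and push new data with UpdateSubresource().  This is only a sketch, not the posted helper; element sizes, names, and the caller are placeholders, and error handling is abbreviated.

    #include <d3d11.h>

    // Sketch: DEFAULT-usage structured buffer, updated from the CPU with UpdateSubresource()
    // and read by the compute shader through an SRV (StructuredBuffer<...> at register(t0)).
    HRESULT CreateReadableStructuredBuffer(ID3D11Device *device,
                                           UINT element_size, UINT count,
                                           const void *initial_data,
                                           ID3D11Buffer **out_buffer,
                                           ID3D11ShaderResourceView **out_srv)
    {
        D3D11_BUFFER_DESC desc = {};
        desc.ByteWidth           = element_size * count;
        desc.Usage               = D3D11_USAGE_DEFAULT;           // GPU-readable, CPU updates via UpdateSubresource
        desc.BindFlags           = D3D11_BIND_SHADER_RESOURCE;    // no UAV bind needed for read-only geometry
        desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
        desc.StructureByteStride = element_size;

        D3D11_SUBRESOURCE_DATA init = {};
        init.pSysMem = initial_data;

        HRESULT hr = device->CreateBuffer(&desc, initial_data ? &init : NULL, out_buffer);
        if (FAILED(hr))
            return hr;

        D3D11_SHADER_RESOURCE_VIEW_DESC srv_desc = {};
        srv_desc.Format              = DXGI_FORMAT_UNKNOWN;       // structured buffers use UNKNOWN
        srv_desc.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
        srv_desc.Buffer.FirstElement = 0;
        srv_desc.Buffer.NumElements  = count;
        return device->CreateShaderResourceView(*out_buffer, &srv_desc, out_srv);
    }

    // Later, whenever the CPU-side geometry changes:
    //   context->UpdateSubresource(buffer, 0, NULL, new_data, 0, 0);
    //   context->CSSetShaderResources(0, 1, &srv);

A dynamic buffer with only D3D11_BIND_SHADER_RESOURCE set, written with Map()/WRITE_DISCARD, is also legal; it is specifically the combination of DYNAMIC usage with a UAV bind that D3D11 rejects.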
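Regarding post 5: the fixed-function depth unit itself is not programmable, but it can be switched off through the depth-stencil state so that a hand-written depth comparison (against your own depth texture, in a pixel or compute shader) is the only test that runs.  A minimal sketch, assuming a device and immediate context already exist:

    #include <d3d11.h>

    // Sketch: disable the built-in depth test/writes so a manual depth comparison
    // implemented in shader code decides visibility instead.
    void DisableFixedFunctionDepth(ID3D11Device *device, ID3D11DeviceContext *context,
                                   ID3D11DepthStencilState **out_state)
    {
        D3D11_DEPTH_STENCIL_DESC desc = {};
        desc.DepthEnable    = FALSE;                        // skip the hardware depth test entirely
        desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;  // don't write to any bound depth buffer
        desc.DepthFunc      = D3D11_COMPARISON_ALWAYS;
        desc.StencilEnable  = FALSE;

        device->CreateDepthStencilState(&desc, out_state);
        context->OMSetDepthStencilState(*out_state, 0);
    }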
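Regarding the unreadable/NaN variables mentioned in post 6: what the VS graphics debugger can show generally depends on how the shader was compiled.  A sketch of compiling with debug information and without optimization follows; the file name and entry point are placeholders, not the project's actual names.

    #include <windows.h>
    #include <d3dcompiler.h>
    #pragma comment(lib, "d3dcompiler.lib")

    // Sketch: compile the compute shader with debug info and no optimization so
    // intermediate variables are not optimized away in the shader debugger.
    HRESULT CompileComputeShaderForDebugging(ID3DBlob **out_bytecode)
    {
        UINT flags = D3DCOMPILE_ENABLE_STRICTNESS;
    #if defined(_DEBUG)
        flags |= D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION;   // keep names and values visible
    #endif

        ID3DBlob *errors = NULL;
        HRESULT hr = D3DCompileFromFile(L"rasterizer_cs.hlsl",      // placeholder file name
                                        NULL, D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                        "CSMain", "cs_5_0",
                                        flags, 0, out_bytecode, &errors);
        if (errors)
        {
            OutputDebugStringA((const char *)errors->GetBufferPointer());
            errors->Release();
        }
        return hr;
    }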
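Regarding the padding issue described in post 7 (the symptom shown in post 10): when writing into a mapped texture, each destination row begins RowPitch bytes after the previous one, and RowPitch can be larger than width * bytes-per-pixel.  A sketch of a row-by-row copy; Pixel and the member names mirror the code in post 10 and are otherwise placeholders.

    #include <cstring>
    #include <d3d11.h>

    // Sketch: copy a tightly packed CPU image into a mapped D3D11 texture row by row,
    // honoring the driver's RowPitch instead of assuming rows are tightly packed.
    struct Pixel { char r, g, b, a; };

    bool UploadPixels(ID3D11DeviceContext *context, ID3D11Texture2D *texture,
                      const char *raw_data, UINT image_width, UINT image_height)
    {
        D3D11_MAPPED_SUBRESOURCE mapped;
        if (FAILED(context->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
            return false;

        const UINT src_pitch = image_width * sizeof(Pixel);   // tightly packed source rows
        char *dest = (char *)mapped.pData;

        for (UINT row = 0; row < image_height; ++row)
        {
            // Source advances by the packed pitch, destination by the padded RowPitch.
            memcpy(dest + row * mapped.RowPitch,
                   raw_data + row * src_pitch,
                   src_pitch);
        }

        context->Unmap(texture, 0);
        return true;
    }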
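Regarding post 9: one thing worth double-checking is that the slot passed to CSSetUnorderedAccessViews matches the register declared in the shader; register(u0) corresponds to start slot 0.  A short sketch (the parameters stand in for the members of post 9's class):

    #include <d3d11.h>

    // Sketch: bind the UAV at the slot matching register(u0) and cover the whole image.
    void DispatchFillShader(ID3D11DeviceContext *context, ID3D11ComputeShader *shader,
                            ID3D11UnorderedAccessView *uav, UINT image_width, UINT image_height)
    {
        context->CSSetShader(shader, NULL, 0);
        context->CSSetUnorderedAccessViews(0, 1, &uav, NULL);   // slot 0 <-> register(u0)
        // One 32x32 thread group per 32x32 pixel tile, rounded up.
        context->Dispatch((image_width + 31) / 32, (image_height + 31) / 32, 1);
    }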
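Regarding the state-block question in post 11: D3D11 does not expose state blocks the way D3D9 did, but a small scope guard that captures the states a rendering operation touches and restores them on destruction covers the common case.  A minimal sketch handling only depth-stencil and rasterizer state (other state would be saved and restored the same way):

    #include <d3d11.h>

    // Sketch: RAII guard that snapshots depth-stencil and rasterizer state on construction
    // and restores them on destruction, so a draw routine can't "leak" its state changes.
    class ScopedPipelineState
    {
    public:
        explicit ScopedPipelineState(ID3D11DeviceContext *context)
            : context_(context), depth_state_(NULL), stencil_ref_(0), raster_state_(NULL)
        {
            context_->OMGetDepthStencilState(&depth_state_, &stencil_ref_);
            context_->RSGetState(&raster_state_);
        }

        ~ScopedPipelineState()
        {
            context_->OMSetDepthStencilState(depth_state_, stencil_ref_);
            context_->RSSetState(raster_state_);
            if (depth_state_)  depth_state_->Release();   // the Get* calls AddRef what they return
            if (raster_state_) raster_state_->Release();
        }

    private:
        ID3D11DeviceContext     *context_;
        ID3D11DepthStencilState *depth_state_;
        UINT                     stencil_ref_;
        ID3D11RasterizerState   *raster_state_;
    };

    // Usage:  { ScopedPipelineState guard(context);  /* set states, draw */ }  // restored here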
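Regarding the rotation breakdown in posts 12 and 13: repeatedly multiplying small axis-angle matrices into this->rotation lets numerical error accumulate, which is one common reason accumulated rotations drift.  A hedged sketch of keeping the orientation in a DirectXTK SimpleMath quaternion and renormalizing each update; the class and member names here are illustrative, not the actual Camera3D class.

    #include <SimpleMath.h>   // DirectXTK
    using namespace DirectX::SimpleMath;

    // Sketch: accumulate orientation in a unit quaternion and rebuild the rotation
    // matrix from it, so error can't build up the way it can with chained matrices.
    class CameraOrientation
    {
    public:
        CameraOrientation() : orientation_(Quaternion::Identity) {}

        // Rotate about an axis given in world space (e.g. the camera's current Right/Up/Forward).
        void RotateAxisAngle(const Vector3 &axis, float angle)
        {
            orientation_ = Quaternion::Concatenate(orientation_,
                                                   Quaternion::CreateFromAxisAngle(axis, angle));
            orientation_.Normalize();   // keep it a unit quaternion
        }

        Matrix RotationMatrix() const
        {
            return Matrix::CreateFromQuaternion(orientation_);
        }

    private:
        Quaternion orientation_;
    };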