Everything posted by Tim Coolman

  1. In my DirectX 11 application, I would like to draw a scene consisting of many textured quads. For the sake of efficiency, my first thought was to use instancing to pull this off in a single draw call: four common vertices and an instance buffer containing transformation matrices to handle positioning of each instance, plus an index for which texture to sample from. I had hoped I could do this using a single Texture2DArray resource for storing my collection of textures, but the textures all vary in size (though they would share the same format). This does not appear to be possible with a Texture2DArray.

     I would really like to avoid a separate draw call for each of these quads. From what I understand, there is overhead involved in draw calls that can create a CPU bottleneck, especially considering I would only be drawing two triangles per call.

     Anyone have suggestions on the most efficient way to do this?
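     To clarify what I mean by instancing, the per-instance layout I had in mind looks roughly like this (just a sketch; the semantic names are made up, and the Texture2DArray slice index is exactly the part that breaks down when the textures differ in size): [source lang="cpp"]// Shared per-vertex data (the four quad corners) plus per-instance data supplied
// from a second vertex buffer marked D3D11_INPUT_PER_INSTANCE_DATA.
struct VertexShaderInput
{
    float3   Position     : POSITION;           // quad corner, shared by all instances
    float2   TextureCoord : TEXCOORD0;          // quad UV, shared by all instances
    float4x4 Transform    : INSTANCE_TRANSFORM; // per-instance transform (uses four input-layout elements)
    uint     TextureIndex : INSTANCE_TEXINDEX;  // per-instance texture to sample
};

Texture2DArray QuadTextures  : register(t0);
SamplerState   LinearSampler : register(s0);

// In the pixel shader the instance's texture would be selected with the third
// texture coordinate, e.g.:
//     float4 color = QuadTextures.Sample(LinearSampler, float3(uv, textureIndex));[/source]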
  2. Drawing many textured quads at once

    Thanks unbird and menohack for your suggestions. The texture atlas may be problematic because these aren't static textures that I can lay out into an atlas resource in advance - these textures are first rendered by prior Draw calls and may be redrawn frequently. Basically I am drawing many things to these off-screen textures and then compositing them to the screen as quads, which I would like to do with instancing. I will consider the Texture2DArray suggestion, using the largest texture for the dimensions.
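    If I go that route, the rough idea (untested, and the names are placeholders) would be to allocate every array slice at the largest texture's dimensions and have each instance carry a UV scale so sampling never reads the unused padding: [source lang="cpp"]// Per-instance data extended with the fraction of the slice each quad actually uses.
struct InstanceData
{
    float4x4 Transform    : INSTANCE_TRANSFORM;
    uint     TextureIndex : INSTANCE_TEXINDEX;
    float2   UVScale      : INSTANCE_UVSCALE; // (textureWidth / maxWidth, textureHeight / maxHeight)
};

// In the vertex shader, shrink the shared quad UVs to the valid region of the slice:
//     output.TextureCoord = float3(input.TextureCoord * input.UVScale, input.TextureIndex);[/source]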
  3. Drawing many textured quads at once

      Thanks for the suggestion. It would be possible, but I'm still hoping for a more straightforward solution.
  4. I have recently been playing around with XNA for the first time. I have some experience with DirectX 10 and 11, and have also gone through some modeling tutorials for Blender. But this is the first time I've tried to import a model created in Blender.

     In the project I'm experimenting with, I am drawing a jet model provided in a Microsoft example, and a simple house model I created in Blender and exported to a .x file. The problem I'm having is that the perspective of the house is the opposite of what it should be, relative to the camera. If the house model is in the center of the viewing area, it looks fine - all I see is the front surface of the model. As the model moves to the right of the camera (translation only, no rotation applied), I should begin to see some of the side of the model that is closest to the camera. Instead, the opposite side becomes visible. The same happens with up and down movement.

     The jet model behaves correctly, but I'm using the same view and projection matrices for both models.

     Here are some screenshots to demonstrate what I'm talking about. It's hard to tell with the jet, but the issue with the house is pretty clear. I'm just looking for some tips as to why this might happen. It's hard for me to understand how the model could be the problem, but since I'm using the same matrices for both models, I feel like there must be something wrong with the way I exported it. Thanks in advance for any time given to help me out!

     [attachment=14031:1.png][attachment=14032:2.png][attachment=14033:3.png][attachment=14034:4.png][attachment=14035:5.png][attachment=14036:6.png]
  5. Model perspective issue in XNA

      You were right! That line appeared to do the trick. Any idea if there is a way to take care of this on the export from Blender so I don't have to modify the cull mode?
  6. I am writing Windows DirectX 11 software in C++ for which I would like to receive input from both a regular keyboard and a 10-key keypad. I would like a secondary user to be able to input from the 10-key keypad without disrupting the primary user's use of the full keyboard. For example, if the primary user is typing into a text box, I would like the secondary user to be able to send 10-key data to the software to be handled separately, so it does not affect the text box input.

     I am currently using DirectInput for both mouse and keyboard, but if anyone knows of a solution through the Windows API, I would consider that as well. When I create my keyboard device in DirectInput, I am currently using the GUID_SysKeyboard value, which lumps both keyboards into one so that my software can't discern the source of keyboard input. Is it possible to use EnumDevices to identify the two keyboards and create separate DirectInput devices? I imagine it would be, but I'm not sure how to go about identifying each device from the DIDEVICEINSTANCE structure provided to the EnumDevices callback. I would like to make this as generic as possible so it can be used with different combinations and models/brands of keyboards.

     Thanks in advance for any help or suggestions! (Note: I posted this same question on StackOverflow.)
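     For reference, the Windows API route I've been reading about is Raw Input, which reports a per-device handle with each keystroke. This is only a sketch of the registration and message handling as I understand it (hWnd is a placeholder for my application window, and I haven't yet tried mapping the device handles back to physical keyboards): [source lang="cpp"]#include <windows.h>
#include <vector>

// Ask Windows to deliver WM_INPUT messages for all keyboards (usage page 0x01,
// usage 0x06 is the generic desktop keyboard).
void RegisterKeyboardsForRawInput(HWND hWnd)
{
    RAWINPUTDEVICE device = {};
    device.usUsagePage = 0x01;
    device.usUsage = 0x06;
    device.dwFlags = RIDEV_INPUTSINK; // receive input even when the window isn't focused
    device.hwndTarget = hWnd;
    RegisterRawInputDevices(&device, 1, sizeof(device));
}

// Inside the window procedure, WM_INPUT identifies which keyboard generated the key.
void HandleRawInput(LPARAM lParam)
{
    UINT size = 0;
    GetRawInputData((HRAWINPUT)lParam, RID_INPUT, NULL, &size, sizeof(RAWINPUTHEADER));

    std::vector<BYTE> buffer(size);
    if (GetRawInputData((HRAWINPUT)lParam, RID_INPUT, buffer.data(), &size, sizeof(RAWINPUTHEADER)) == size)
    {
        const RAWINPUT* raw = reinterpret_cast<const RAWINPUT*>(buffer.data());
        if (raw->header.dwType == RIM_TYPEKEYBOARD)
        {
            HANDLE sourceDevice = raw->header.hDevice;   // distinguishes the two keyboards
            USHORT virtualKey = raw->data.keyboard.VKey; // key that was pressed/released
            // Route keypad input separately based on sourceDevice here.
        }
    }
}[/source]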
  7. Separate input for additional 10-key keypad

    I'll bump this once just because I posted this topic late on a Friday afternoon. Anyone have any ideas on this?
  8. I posted this question to the nVidia developer forum under NSight Visual Studio, and I got this response from a moderator: [quote]Debugging DirectCompute shaders is the similar process as to debugging any other shader. Please take a look at the user's guide, under Graphics Debugger > Shader Debugger.[/quote] Simple answer: I had just overlooked this, assuming compute debugging would be more like CUDA debugging. I followed these instructions and it works great.
  9. I would like to debug my DirectCompute shader. NVIDIA's NSight website claims that it supports DirectCompute for GPGPU debugging, but their documentation only shows how to debug CUDA C++ code. I have successfully used NSight to do graphics debugging and it works great - I run NSight on my laptop, which copies and launches my application on my desktop PC and allows me to debug remotely. I can't seem to figure out how to get compute shader debugging to work, though. I tried putting a breakpoint inside the compute shader function of my .fx file, but it doesn't trigger when my C++ application calls Dispatch for that shader. Could it have something to do with the fact that my application compiles all my shaders at runtime?

     Has anyone had any success debugging their DirectCompute HLSL code using NVIDIA NSight? If so, any guidance would be much appreciated! Thanks, Tim
  10. I am using a pixel shader to put some data into a texture. Typically, with a float4 formatted texture, you would output RGBA color data to the texture, where each color component is a 0.0 - 1.0 float value. I'm trying to use the pixel shader to store non-color data. This texture is not meant for display. Instead, once the texture is filled, I convert the texture texels to a different binary format using a compute shader (due to the nature of the data, it makes sense for me to output this data with a pixel shader).

      When outputting to the texture from my pixel shader, I would like to store some uint values instead of floats in the Y, Z, W components. So here is an example of how I'm trying to return from the pixel shader: [source lang="cpp"]return float4(floatValue, asfloat(firstUintValue), asfloat(secondUintValue), asfloat(thirdUintValue));[/source] I do this because I don't want to cast the uint values to float, but rather maintain their binary equivalent. However, when I read from the texture using my compute shader and convert these values back to uint using the asuint(texel.Y) function, they do not seem to be the same values I attempted to store in the first place. Actually, most of the time I seem to get ZERO values out of this. I know that I have supplied my compute shader with the texture as a shader resource properly, because I am able to retrieve the X component of the texels, which you'll notice above was a regular float (between 0.0 and 1.0).

      Does the pixel shader require output to be 0.0 - 1.0 floats and do automatic adjustments otherwise? Thank you for your time and assistance.
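      For completeness, the read side in my compute shader looks roughly like this (a trimmed-down sketch; the resource names and output indexing are placeholders, not my actual code): [source lang="cpp"]Texture2D<float4> PackedData : register(t0);    // the texture filled by the pixel shader
RWStructuredBuffer<uint> Output : register(u0); // destination for the converted data

[numthreads(8, 8, 1)]
void UnpackCS(uint3 id : SV_DispatchThreadID)
{
    float4 texel = PackedData[id.xy];

    float floatValue      = texel.x;          // ordinary 0.0 - 1.0 value, reads back fine
    uint  firstUintValue  = asuint(texel.y);  // bit pattern written with asfloat() in the pixel shader
    uint  secondUintValue = asuint(texel.z);
    uint  thirdUintValue  = asuint(texel.w);

    Output[id.y * 256 + id.x] = firstUintValue; // 256 is a placeholder row pitch
}[/source]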
  11. Well, after trying a few other ways to do this, I put it back to how I had it and... now it works! Magic. I have no idea what changed since my first attempt, but it is now working as I'd originally expected. I apologize, as I feel like I wasted your time with this question. But now, using a DXGI_FORMAT_R32G32B32A32_FLOAT texture, I'm able to store UINT values using asfloat() and asuint() to convert back and forth between the pixel and compute shaders.
  12. Okay. The values I'd like to store consist of one float and three uint values. Do you think using DXGI_FORMAT_R32G32B32A32_TYPELESS instead of DXGI_FORMAT_R32G32B32A32_FLOAT would prevent unexpected conversions from occurring?
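      What I'm picturing, if I go the typeless route, is roughly this (an untested sketch; device, width, and height are placeholders). The texture itself is created typeless, and the render target and shader resource views pick a concrete UINT format, so the pixel shader would output a uint4 and no float normalization could touch the bits: [source lang="cpp"]// Typeless texture so the same resource can be viewed with different formats.
D3D11_TEXTURE2D_DESC textureDescription = {};
textureDescription.Width = width;
textureDescription.Height = height;
textureDescription.MipLevels = 1;
textureDescription.ArraySize = 1;
textureDescription.Format = DXGI_FORMAT_R32G32B32A32_TYPELESS;
textureDescription.SampleDesc.Count = 1;
textureDescription.Usage = D3D11_USAGE_DEFAULT;
textureDescription.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* texture = nullptr;
device->CreateTexture2D(&textureDescription, nullptr, &texture);

// Render target view seen by the pixel shader as raw 32-bit unsigned integers
// (the shader's SV_Target output would then be uint4, packing the float with asuint()).
D3D11_RENDER_TARGET_VIEW_DESC rtvDescription = {};
rtvDescription.Format = DXGI_FORMAT_R32G32B32A32_UINT;
rtvDescription.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
ID3D11RenderTargetView* rtv = nullptr;
device->CreateRenderTargetView(texture, &rtvDescription, &rtv);

// Shader resource view for the compute shader, also as UINT.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDescription = {};
srvDescription.Format = DXGI_FORMAT_R32G32B32A32_UINT;
srvDescription.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDescription.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView* srv = nullptr;
device->CreateShaderResourceView(texture, &srvDescription, &srv);[/source]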
  13. What kind of conversions? I just figured that since I was using the asfloat() function to store my UINT values, the texture would accept it as a float - how would the texture know that it is actually the binary representation of a UINT? Unless the texture requires that the value be a valid color-component value between 0.0 and 1.0. I'll have to think about this. The reason I'm doing it this way is because I actually am storing graphical data - I still take advantage of the way the pixel shader projects the data onto my texture using transformation matrices, and I also need it to take care of depth buffering and resolution. However, I don't care about color - instead I have other data to keep track of, which is why I was trying to use the color-component values to store other information.
  14. I will also ask if anyone can recommend other methods of compute shader debugging. If possible, I'd really like to be able to debug my shader in the context of my application so that I can see for certain the data and parameters it has been given from my application.
  15. I am trying to better understand the limitations implied by the register keyword for HLSL buffers, textures, and samplers. I will explain my understanding of it, then pose a couple of questions. Any corrections, verification, or clarification on this topic is much appreciated.

      Let's take constant buffers for example. The first parameter of VSSetConstantBuffers is StartSlot, which I believe corresponds to the register defined for a cbuffer in HLSL. So, if you have: [source lang="cpp"]cbuffer MyConstantBuffer : register(b2)
{
    ...
};[/source] then in order to supply an ID3D11Buffer resource for that cbuffer, you would call: [source lang="cpp"]myContext->VSSetConstantBuffers(2, 1, &myBufferResource);[/source] where "2" is the slot/register number indicated by "b2" in HLSL.

      In the MSDN documentation for VSSetConstantBuffers, the StartSlot parameter can be from 0 to D3D11_COMMONSHADER_CONSTANT_BUFFER_API_SLOT_COUNT - 1. That would be 0 to 13 on my machine. So I assume that means I am limited to 14 cbuffers in my HLSL code, with registers ranging from "b0" to "b13". As I understand it, these registers can't be reused for multiple, "incompatible" constant buffers. But is it also correct to assume that the scope of this limitation is per compiled shader? Currently I have all of my vertex and pixel shaders in a single file. So when the shaders are compiled, they are sharing the same configuration of constant buffers, textures, and samplers. With this setup I'm limited to 14 total constant buffers between all shaders. But if I separated out all my shaders into different files with their own constant buffer declarations, I could have 14 constant buffers PER shader, since they would be completely separated from each other at compile time. Is this correct? Then when I call VSSetConstantBuffers, I will provide the StartSlot number specific to the shader I'm about to use.

      I hope this has made sense. Let me know if I am misunderstanding this at all. At first I thought that D3D11 programs were actually limited to 14 total constant buffers, but I found this hard to believe for large-scale projects.
  16. Thanks to both MikeBMcL and MJP for your input. This really helps clarify things for me. Always helps to understand a little bit better how things work, even the things that happen "behind the scenes".
  17. I understand that. The thing I'm still unsure about is whether you can use the same register number for multiple cbuffer structures in the same HLSL file, as long as you don't use more than one of those in a given pipeline stage. For example, can you do this... [source lang="cpp"]cbuffer BufferForVertexShader : register(b0)
{
    float variable1;
    float variable2;
    uint variable3;
};

cbuffer BufferForPixelShader : register(b0)
{
    float4x4 variable1;
    float4 variable2;
    int variable3;
    int variable4;
};[/source] ... as long as you only access one cbuffer from either pipeline stage and have the proper buffer bound to slot 0 for each shader? Or does this cause compilation problems within the same HLSL file and require you to separate them out into different files? Thanks.
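      On the C++ side, what I'm picturing (if this is allowed) is that each stage gets its own buffer bound to slot 0 - something like this sketch, where the buffer variables are placeholders: [source lang="cpp"]// Each shader stage has its own set of constant buffer slots, so slot 0 for the
// vertex shader and slot 0 for the pixel shader can hold different resources.
context->VSSetConstantBuffers(0, 1, &vertexShaderBuffer); // would feed BufferForVertexShader : register(b0)
context->PSSetConstantBuffers(0, 1, &pixelShaderBuffer);  // would feed BufferForPixelShader : register(b0)[/source]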
  18. I have a point list of vertices (D3D11_PRIMITIVE_TOPOLOGY_POINTLIST) that I wish to render to two render targets. I was hoping I'd be able to set both render targets simultaneously using OMSetRenderTargets. However, my situation is unique in that I want each point (vertex) to be projected differently onto the two render targets.

      If I'm understanding the use of multiple render targets correctly, you can only specify different output COLORS for each render target from the pixel shader using the SV_Target semantic - using an index suffix to specify the render target (SV_Target0, SV_Target1). But by this stage in the pipeline, the projection position of the pixel has already been determined. It appears that the SV_Position semantic doesn't give you the same option to specify multiple projection positions for a vertex from the vertex shader. Is that correct? Can anyone think of another way I could pull this off? Or will I need to perform two separate draw calls on this set of vertices?

      Thanks in advance for any help and suggestions. Tim
  19. No problem. I can do it in two separate draw calls; I was just wondering if I could improve efficiency since the input data is exactly the same. Thanks for the info!
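      For anyone who finds this thread later, the two-pass fallback I have in mind looks roughly like this (a sketch only; the render target, depth, and constant buffer arrays are placeholder names): [source lang="cpp"]// Same point-list vertex buffer both times; only the render target and the
// view/projection constants change between passes.
for (int pass = 0; pass < 2; ++pass)
{
    context->OMSetRenderTargets(1, &renderTargetViews[pass], depthStencilViews[pass]);
    context->VSSetConstantBuffers(0, 1, &perTargetConstants[pass]); // different projection per target
    context->Draw(pointCount, 0);
}[/source]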
  20. I have not tried this. I thought I read on another forum post that for registers to be reused they have to be "compatible" in some way. But I understand what you mean. If you have two different constant buffers defined with the same register slot in HLSL, maybe it is okay to use one or the other from a given shader as long as that slot has been provided with appropriate data from C++ first. Anyone else know the answer? I won't know for sure until I have time to set up an experiment. Thanks, Tim
  21. Thanks for the suggestion. I looked this over, but it will not work for me as my render targets are not an array type (and can't be as they are different dimensions). Any other ideas?
  22. Blurred Texture Sampling Minification

    You guys are my heroes. Rather than passing NULL for the pLoadInfo parameter, I passed in a D3DX11_IMAGE_LOAD_INFO structure where I only set the MipLevels value to 1 (and left all the rest at defaults). Worked like a charm! I learn something new every day. I didn't realize that DirectX would generate MIPs for you - I thought you had to supply your own variations of a texture to set up MIP filtering. Guess I should do some more reading up on MIPs and see what other kinds of trouble I can get myself into ;) Thanks again. Tim
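    In case it helps anyone else, the change amounts to roughly this (a sketch from memory; the device pointer and file name are placeholders from my own project): [source lang="cpp"]// Load the button texture with a single mip level so no mip chain is generated
// and minification can't sample from the blurrier, auto-generated levels.
D3DX11_IMAGE_LOAD_INFO loadInfo;   // constructor leaves every field at D3DX11_DEFAULT
loadInfo.MipLevels = 1;

ID3D11ShaderResourceView* buttonTextureSRV = nullptr;
HRESULT hr = D3DX11CreateShaderResourceViewFromFile(
    device,                             // ID3D11Device*
    L"Button Background Template.png",  // source image
    &loadInfo,                          // load options (instead of NULL)
    nullptr,                            // no thread pump - load synchronously
    &buttonTextureSRV,
    nullptr);[/source]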
  23. I am writing a program in C++ using DirectX 11. As part of my user interface, I want to display buttons whose background texture consists of a rectangle with rounded edges. However, I want to be able to make these buttons various sizes without having to maintain the same aspect ratio as the source texture. But I also don't want the corners to be squished or stretched when changing the size.

      Below is the original source image that I'm using for my texture/shader resource: opaque black border and opaque white filling. The small area outside the border's corners is transparent. [attachment=9819:Button Background Template.png]

      To address this, instead of using a simple quad of four vertices and mapping the texture directly to the four corners, I created a surface consisting of a 4x4 grid of vertices. The idea is that as I change the size of my button, I can maintain the original dimensions of the four corners and just stretch or shrink the edge and center areas. I've tried to give a representation of what I'm talking about in the image below. The values on the left are texture 'Y' coordinates (the 'X' coordinates would be the same going from left to right). I'm also showing that I've given a color to the vertices as well (because in my pixel shader I am blending the vertex color with the sampled texture color). [attachment=9822:Button Background Template Description.png]

      As you can see from the images below, my plan works great when I expand the size of the button. However, if I shrink the size of the button relative to the original texture, the corners look correct, but the texture sampling in the edge portions gives a blurred result. (Ignore that the gradient isn't linear from top to bottom - I am aware of the reason for this. My focus is on the blurring.) (The original texture shown above is 100x100 pixels. The expanded button's dimensions are 400x200; the shrunk button's dimensions are 75x42.) [attachment=9820:Expanded.png] [attachment=9821:Shrunk.png]

      I am confident that the issue is not with the blending that I'm doing - I removed that portion from the shader so I was doing a simple texture sample, but the blurred edges were still there. I am fairly new to texture sampling, so I'm thinking maybe I have something wrong in my sampler state that is causing the blur. Here is my current sampler setup: [source lang="cpp"]D3D11_SAMPLER_DESC samplerDescription;
ZeroMemory(&samplerDescription, sizeof(samplerDescription));
samplerDescription.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDescription.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDescription.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDescription.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
samplerDescription.ComparisonFunc = D3D11_COMPARISON_NEVER;
samplerDescription.MinLOD = 0;
samplerDescription.MaxLOD = D3D11_FLOAT32_MAX;[/source]

      And if it helps, here is my pixel shader: [source lang="cpp"]float4 PixelShaderTextureBlendHUD(PixelShaderInput pixelIn) : SV_Target
{
    // Sample the texture for a color at the interpolated texture coordinate.
    float4 outColor = DiffuseMap.Sample(SampleLinear, pixelIn.TextureCoordinate);

    // Do multiplicative blending of the interpolated vertex color with the sampled color.
    outColor *= pixelIn.Diffuse;

    // If the constant buffer 'IsGrayscale' flag is set, calculate the grayscale version of the color.
    if(IsGrayscale > 0)
    {
        float luminance = outColor.x * 0.3f + outColor.y * 0.59f + outColor.z * 0.11f;
        outColor = float4(luminance, luminance, luminance, outColor.w);
    }

    return outColor;
}[/source]

      What might be causing the sampler to blur the image this way when the display area is smaller than the area of the texture being sampled? I tried using a point sampler instead of linear - that looked slightly different, but just as bad. Any tips would be greatly appreciated!
  24. Blurred Texture Sampling Minification

    To be honest, I don't have any experience using MIP filtering. How would I go about turning it off? To use my texture in my pixel shader, I currently create an ID3D11ShaderResourceView using the D3DX11CreateShaderResourceViewFromFile function (my image is a PNG file). As I understand it, the D3DX11 library is deprecated, but at this point I've yet to learn how else to create a texture from a file for use as an ID3D11ShaderResourceView.

    So, using the D3DX11CreateShaderResourceViewFromFile function, I see the pLoadInfo parameter does contain a MipLevels field; however, I'm currently passing NULL for that parameter. By default, do multiple MIP levels get created from a single image file? I have not explicitly created various sizes of the same texture - so unless this is done automatically, I don't know why MIP filtering would even be happening with my current code.