Tim Coolman

  1. Tim Coolman

    Drawing many textured quads at once

    Thanks unbird and menohack for your suggestions. The texture atlas may be problematic because these aren't static textures that I can lay out in an atlas resource in advance - these textures are first rendered by prior Draw calls and may be redrawn frequently. Basically, I am drawing many things to these off-screen textures and then compositing them to the screen as quads, which I would like to do with instancing. I will consider the Texture2DArray suggestion, using the largest texture for the dimensions.
  2. Tim Coolman

    Drawing many textured quads at once

      Thanks for the suggestion. It would be possible, but I'm still hoping for a more straightforward solution.
  3. In my DirectX 11 application, I would like to draw a scene consisting of many textured quads. For the sake of efficiency, my first thought was to use instancing to pull this off in a single draw call: four common vertices and an instance buffer containing a transformation matrix to handle the positioning of each instance, plus an index for which texture to sample from. I had hoped I could do this using a single Texture2DArray resource for storing my collection of textures, but the textures all vary in size (though they share the same format). This does not appear to be possible with a Texture2DArray.

     I would really like to avoid a separate draw call for each of these quads. From what I understand, there is overhead involved in draw calls that can create a CPU bottleneck, especially considering I would only be drawing two triangles per call.

     Anyone have suggestions on the most efficient way to do this?
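    The per-instance data described above (a world transform plus a texture-array slice index) might be laid out as follows. This is a minimal sketch, not code from the post; the struct name, the row-major matrix convention, and the padding are illustrative assumptions about how the instance buffer and input layout would be declared:

    ```cpp
    #include <cstdint>

    // Per-instance data for the second vertex-buffer slot when instancing:
    // a 4x4 world transform plus an index selecting the Texture2DArray slice.
    struct QuadInstance {
        float world[4][4];     // row-major world transform for this quad
        uint32_t textureSlice; // which slice of the Texture2DArray to sample
        uint32_t padding[3];   // pad the stride to a 16-byte multiple
    };

    // The input-layout / IASetVertexBuffers stride must match this exactly.
    static_assert(sizeof(QuadInstance) == 80, "unexpected instance stride");

    // Fill one instance with an identity transform and a slice index.
    QuadInstance makeInstance(uint32_t slice) {
        QuadInstance inst{};
        for (int i = 0; i < 4; ++i) inst.world[i][i] = 1.0f;
        inst.textureSlice = slice;
        return inst;
    }
    ```

    The shader would then read `textureSlice` per instance and use it as the array index when sampling, so one DrawInstanced call covers every quad.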
  4. Tim Coolman

    Model perspective issue in XNA

      You were right! That line appeared to do the trick. Any idea if there is a way to take care of this on the export from Blender so I don't have to modify the cull mode?
  5. I have recently been playing around with XNA for the first time. I have some experience with DirectX 10 and 11, and have also gone through some modeling tutorials for Blender. But this is the first time I've tried to import a model created in Blender.

     In the project I'm experimenting with, I am drawing a jet model provided in a Microsoft example, and a simple house model I created in Blender and exported to a .x file. The problem I'm having is that the perspective of the house is the opposite of what it should be, relative to the camera. If the house model is in the center of the viewing area, it looks fine - all I see is the front surface of the model. As the model moves to the right of the camera (translation only, no rotation applied), I should begin to see some of the side of the model that is closest to the camera. Instead, the opposite side becomes visible. The same happens with up and down movement.

     The jet model behaves correctly, even though I'm using the same view and projection matrices for both models.

     Here are some screenshots to demonstrate what I'm talking about. It's hard to tell with the jet, but the issue with the house is pretty clear. I'm just looking for some tips as to why this might happen. It's hard for me to understand how the model could be the problem, but since I'm using the same matrices for both models, I feel like there must be something wrong with the way I exported the model or something. Thanks in advance for any time given to help me out!

     [attachment=14031:1.png][attachment=14032:2.png][attachment=14033:3.png][attachment=14034:4.png][attachment=14035:5.png][attachment=14036:6.png]
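    The symptom described above (seeing the far side of the model instead of the near side) is consistent with the house's triangles being wound the opposite way from what the active cull mode expects, which can happen when an exporter converts between Blender's right-handed coordinates and XNA's conventions. Besides changing the cull mode, the winding can be fixed once on the CPU. A sketch, assuming a plain triangle-list index buffer (the function name is illustrative):

    ```cpp
    #include <cstdint>
    #include <utility>
    #include <vector>

    // Reverse the winding of every triangle in a triangle-list index buffer.
    // Swapping two of the three indices per triangle turns clockwise faces
    // into counter-clockwise ones (and vice versa), so the default cull mode
    // keeps the correct faces without touching the rasterizer state.
    void flipWinding(std::vector<uint32_t>& indices) {
        for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
            std::swap(indices[i + 1], indices[i + 2]);
        }
    }
    ```

    Applying this to the imported mesh's indices at load time would let both models share one cull mode.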
  6. Tim Coolman

    Separate input for additional 10-key keypad

    I'll bump this once just because I posted this topic late on a Friday afternoon. Anyone have any ideas on this?
  7. I posted this question to the nVidia developer forum under NSight Visual Studio, and I got this response from a moderator: [quote]Debugging DirectCompute shaders is the same process as debugging any other shader. Please take a look at the user's guide, under Graphics Debugger > Shader Debugger.[/quote] Simple answer: I had just overlooked this, assuming compute debugging would be more like CUDA debugging. I followed these instructions and it works great.
  8. Well, after trying a few other ways to do this, I put it back to how I had it and... now it works! Magic. I have no idea what changed since my first attempt, but it is now working as I'd originally expected. I apologize, as I feel like I wasted your time with this question. But now, using a DXGI_FORMAT_R32G32B32A32_FLOAT texture, I'm able to store UINT values, using asfloat() and asuint() to convert back and forth between the pixel and compute shaders.
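    For readers unfamiliar with HLSL's asfloat()/asuint(): they reinterpret the 32-bit pattern rather than converting the numeric value. A minimal CPU-side sketch of the same idea, using memcpy for a well-defined bit reinterpretation (std::bit_cast would do the same in C++20):

    ```cpp
    #include <cstdint>
    #include <cstring>

    // Reinterpret a 32-bit pattern as float, like HLSL's asfloat().
    float asFloatBits(uint32_t u) {
        float f;
        std::memcpy(&f, &u, sizeof f); // copy bits, no numeric conversion
        return f;
    }

    // Reinterpret a float's bits as uint32_t, like HLSL's asuint().
    uint32_t asUintBits(float f) {
        uint32_t u;
        std::memcpy(&u, &f, sizeof u);
        return u;
    }
    ```

    The round trip asUintBits(asFloatBits(x)) returns x unchanged on the CPU; the earlier trouble in this thread came from pushing such bit patterns through the GPU texture pipeline, not from the intrinsics themselves.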
  9. I am writing Windows DirectX 11 software in C++ for which I would like to receive input from both a regular keyboard and a 10-key keypad. I would like a secondary user to be able to input from a 10-key keypad without disrupting the use of the full keyboard by the primary user. For example, if the primary user is typing into a text box, I would like the secondary user to be able to send 10-key data to the software to be handled separately, so it does not affect the text box input.

     I am currently using DirectInput for both mouse and keyboard, but if anyone knows of a solution through the Windows API, I would consider that as well. When I create my keyboard device in DirectInput, I am currently using the GUID_SysKeyboard value, which lumps both keyboards into one so that my software can't discern the source of keyboard input. Is it possible to use EnumDevices to identify the two keyboards and create separate DirectInput devices? I imagine it would be, but I'm not sure how to go about identifying each device from the DIDEVICEINSTANCE structure provided to the EnumDevices callback.

     I would like to make this as generic as possible so it can be used with different combinations and models/brands of keyboards. Thanks in advance for any help or suggestions! (Note: I posted this same question on StackOverflow.)
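    Whichever API ends up supplying the events (Raw Input's WM_INPUT messages report a per-device HANDLE in RAWINPUTHEADER, which is the usual Win32 way to tell keyboards apart), the application-side work is the same: route each keystroke by its source device. A sketch of that routing layer, with a plain integer standing in for the OS device handle; the class and method names are illustrative, not from any API:

    ```cpp
    #include <cstdint>
    #include <functional>
    #include <map>
    #include <utility>

    // DeviceId stands in for whatever handle the input API supplies
    // (e.g. the device HANDLE from a Raw Input RAWINPUTHEADER).
    using DeviceId = std::uintptr_t;
    using KeyHandler = std::function<void(uint32_t scanCode)>;

    // Routes keystrokes to different consumers based on source device,
    // so the keypad's handler never touches the main keyboard's text input.
    class KeyboardRouter {
    public:
        void bind(DeviceId device, KeyHandler handler) {
            handlers_[device] = std::move(handler);
        }

        // Returns true if a bound handler consumed the key; false means
        // the device is unknown and the key can fall through to the
        // default path (e.g. the primary user's text box).
        bool dispatch(DeviceId device, uint32_t scanCode) {
            auto it = handlers_.find(device);
            if (it == handlers_.end()) return false;
            it->second(scanCode);
            return true;
        }

    private:
        std::map<DeviceId, KeyHandler> handlers_;
    };
    ```

    At startup the application would enumerate the attached keyboards, decide which handle is the 10-key pad, and bind only that one, leaving all other devices to the normal input path.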
  10. Okay. The values I'd like to store consist of one float and three uint values. Do you think using DXGI_FORMAT_R32G32B32A32_TYPELESS instead of DXGI_FORMAT_R32G32B32A32_FLOAT would prevent unexpected conversions from occurring?
  11. What kind of conversions? I just figured that since I was using the asfloat() function to store my UINT values, the texture would accept it as a float - how would the texture know the difference, that it is actually a binary representation of a UINT? Unless the texture requires that the value be a valid color-component value between 0.0 and 1.0. I'll have to think about this.

      The reason I'm doing it this way is that I actually am storing graphical data - I still take advantage of the way the pixel shader projects the data onto my texture using transformation matrices, and I also need it to take care of depth buffering and resolution. However, I don't care about color - instead, I have other data to keep track of, which is why I was trying to use the color-component values to store other information.
  12. I am using a pixel shader to put some data into a texture. Typically, with a float4-formatted texture, you would output RGBA color data to the texture, where each color component is a 0.0 - 1.0 float value. I'm trying to use the pixel shader to store non-color data. This texture is not meant for display. Instead, once the texture is filled, I convert the texture texels to a different binary format using a compute shader (due to the nature of the data, it makes sense for me to output this data with a pixel shader).

      When outputting to the texture from my pixel shader, I would like to store some uint values instead of floats in the Y, Z, W components. So here is an example of how I'm trying to return from the pixel shader: [source lang="hlsl"] return float4(floatValue, asfloat(firstUintValue), asfloat(secondUintValue), asfloat(thirdUintValue)); [/source] I do this because I don't want to cast the uint values to float, but rather maintain their binary equivalent.

      However, when I read from the texture using my compute shader and convert these values back to uint using the asuint(texel.Y) function, they do not seem to be the same values I attempted to store in the first place. Actually, most of the time I seem to get ZERO values out of this. I know that I have supplied my compute shader with the texture as a shader resource properly, because I am able to retrieve the X component of the texels, which you'll notice above was a regular float (between 0.0 and 1.0). Does the pixel shader require output to be 0.0 - 1.0 floats and do automatic adjustments otherwise? Thank you for your time and assistance.
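    One plausible explanation for the zeros, consistent with the TYPELESS suggestion later in this thread: many uint bit patterns are not "ordinary" floats. Patterns with all exponent bits set are NaN or infinity, and small integers land in the denormal range; a FLOAT-format pipeline may canonicalize NaN payloads or flush denormals to zero, while a TYPELESS/UINT view passes the bits through untouched. A sketch classifying which uint values are at risk, assuming IEEE-754 single precision (function names are illustrative):

    ```cpp
    #include <cmath>
    #include <cstdint>
    #include <cstring>

    // Reinterpret a uint's bits as a float, as HLSL's asfloat() does.
    float bitsToFloat(uint32_t u) {
        float f;
        std::memcpy(&f, &u, sizeof f);
        return f;
    }

    // True if storing this bit pattern through a FLOAT-format texture is
    // risky: NaN payloads may be canonicalized by the hardware, and
    // denormals (e.g. every small integer below ~0x00800000) may be
    // flushed to zero.
    bool riskyInFloatTexture(uint32_t u) {
        float f = bitsToFloat(u);
        return std::isnan(f) ||
               (f != 0.0f && std::fpclassify(f) == FP_SUBNORMAL);
    }
    ```

    Since small uint counters are exactly the values that reinterpret as denormals, "mostly zeros on readback" is the expected failure mode for a FLOAT-format texture.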
  13. I will also ask if anyone can recommend other methods of compute shader debugging. If possible, I'd really like to be able to debug my shader in the context of my application so that I can see for certain the data and parameters it has been given from my application.
  14. I would like to debug my DirectCompute shader. NVIDIA's NSight website claims that it supports DirectCompute for GPGPU debugging, but their documentation only shows how to debug CUDA C++ code.

      I have successfully used NSight to do graphics debugging and it works great - I run NSight on my laptop, which copies and launches my application on my desktop PC, and allows me to debug remotely. I can't seem to figure out how to get compute shader debugging to work, though. I tried putting a breakpoint inside the compute shader function of my .fx file, but it doesn't trigger when my C++ application calls Dispatch for that shader. Could it have something to do with the fact that my application compiles all my shaders at runtime?

      Has anyone had any success debugging their DirectCompute HLSL code using NVIDIA NSight? If so, any guidance would be much appreciated!

      Thanks, Tim
  15. Thanks to both MikeBMcL and MJP for your input. This really helps clarify things for me. Always helps to understand a little bit better how things work, even the things that happen "behind the scenes".