

Tim Coolman

Member Since 27 Mar 2012
Offline Last Active May 24 2013 12:14 PM

Topics I've Started

Drawing many textured quads at once

22 May 2013 - 02:19 PM

In my DirectX 11 application, I would like to draw a scene consisting of many textured quads. For the sake of efficiency, my first thought was to use instancing to pull this off in a single draw call: four shared vertices, plus an instance buffer that holds, for each quad, a transformation matrix for positioning and an index selecting which texture to sample. I had hoped to store my collection of textures in a single Texture2DArray resource, but the textures all vary in size (though they would share the same format), and that does not appear to be possible with a Texture2DArray.
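
To make the idea concrete, here is a rough sketch of the per-instance layout and single draw call I have in mind (not working code; the struct name, semantic names, and the context/numQuads variables are just placeholders):

[source lang="cpp"]
#include <d3d11.h>
#include <DirectXMath.h>

// Per-instance data: one world transform plus a texture index per quad.
struct QuadInstance
{
    DirectX::XMFLOAT4X4 transform;    // positions/orients this quad
    UINT                textureIndex; // which texture this quad samples
};

// Input layout: slot 0 holds the four shared quad vertices,
// slot 1 holds the per-instance data (stepped once per instance).
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION",  0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA,   0 },
    { "TEXCOORD",  0, DXGI_FORMAT_R32G32_FLOAT,       0, 12, D3D11_INPUT_PER_VERTEX_DATA,   0 },
    { "TRANSFORM", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1,  0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TRANSFORM", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TRANSFORM", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TRANSFORM", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
    { "TEXINDEX",  0, DXGI_FORMAT_R32_UINT,           1, 64, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
};

// One call draws every quad: 6 indices per quad, numQuads instances.
context->DrawIndexedInstanced(6, numQuads, 0, 0, 0);
[/source]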

 

I would really like to avoid a separate draw call for each of these quads. From what I understand, there is per-draw-call CPU overhead that can become a bottleneck, especially when each call would only draw two triangles.

 

Anyone have suggestions on the most efficient way to do this?


Model perspective issue in XNA

06 March 2013 - 08:23 PM

I have recently been playing around with XNA for the first time. I have some experience with DirectX 10 and 11, and have also gone through some modeling tutorials for Blender. But this is the first time I've tried to import a model created in Blender.

 

In the project I'm experimenting with, I am drawing a jet model provided in a Microsoft example, and a simple house model I created in Blender and exported to a .x file. The problem I'm having is that the perspective of the house is the opposite of what it should be, relative to the camera. If the house model is in the center of the viewing area, it looks fine: all I see is the front surface of the model. As the model moves to the right of the camera (translation only, no rotation applied), I should begin to see some of the side of the model that is closest to the camera. Instead, the opposite side becomes visible. The same happens with up and down movement.

 

The jet model behaves correctly, even though I'm using the same view and projection matrices for both models.

 

Here are some screenshots to demonstrate what I'm talking about. It's hard to tell with the jet, but the issue with the house is pretty clear. I'm just looking for some tips as to why this might happen. It's hard for me to understand how the model itself could be the problem, but since I'm using the same matrices for both models, I feel like there must be something wrong with the way I exported the model. Thanks in advance for any time given to help me out!

 

Attachments: 1.png, 2.png, 3.png, 4.png, 5.png, 6.png


Separate input for additional 10-key keypad

30 November 2012 - 02:58 PM

I am writing Windows DirectX 11 software in C++ for which I would like to receive input from both a regular keyboard and a 10-key keypad. I would like a secondary user to be able to input from a 10-key keypad without disrupting the use of the full keyboard by the primary user. For example, if the primary user is typing into a text box, I would like the secondary user to be able to send 10-key data to the software to be handled separately so it does not affect the text box input. I am currently using DirectInput for both mouse and keyboard. But if anyone knows of a solution through the Windows API, I would consider that as well.

When I create my keyboard device in DirectInput, I am currently using the GUID_SysKeyboard value, which lumps both keyboards into one so that my software can't discern the source of keyboard input. Is it possible to use EnumDevices to identify the two keyboards and create separate DirectInput devices? I imagine it would be, but I'm not sure how to go about identifying each device from the DIDEVICEINSTANCE structure provided to the EnumDevices callback. I would like to make this as generic as possible so it can be used with different combinations and models/brands of keyboards.
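
To show the direction I was considering, here is a rough sketch (untested; the directInput pointer and the global vector are just placeholders) of enumerating keyboard-class devices so that each physical keyboard could get its own DirectInput device:

[source lang="cpp"]
#define DIRECTINPUT_VERSION 0x0800
#include <dinput.h>
#include <vector>

// Collect every attached keyboard-class device so a separate
// IDirectInputDevice8 could be created for each physical keyboard.
std::vector<DIDEVICEINSTANCE> g_keyboards;

BOOL CALLBACK EnumKeyboardsCallback(LPCDIDEVICEINSTANCE instance, LPVOID /*context*/)
{
    // guidInstance, tszInstanceName, and tszProductName are what I would
    // look at to tell the keyboards apart, though I'm not sure how
    // reliable they are across brands and models.
    g_keyboards.push_back(*instance);
    return DIENUM_CONTINUE;
}

void EnumerateKeyboards(IDirectInput8* directInput)
{
    // Enumerate attached keyboard-class devices instead of using GUID_SysKeyboard.
    directInput->EnumDevices(DI8DEVCLASS_KEYBOARD, EnumKeyboardsCallback,
                             nullptr, DIEDFL_ATTACHEDONLY);

    // Then, presumably, call CreateDevice(entry.guidInstance, ...) once per entry.
}
[/source]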

Thanks in advance for any help or suggestions!

(Note: I posted this same question on Stack Overflow.)

Storing non-color data in texture from pixel shader

12 November 2012 - 05:02 PM

I am using a pixel shader to put some data into a texture. Typically, with a float4 formatted texture, you would output RGBA color data to the texture where each color component is a 0.0 - 1.0 float value. I'm trying to use the pixel shader to store non-color data. This texture is not meant for display. Instead, once the texture is filled, I convert the texture texels to a different binary format using a compute shader (due to the nature of the data, it makes sense for me to output this data with a pixel shader).

When outputting to the texture from my pixel shader, I would like to store some uint values instead of floats in the Y, Z, W components. So here is an example of how I'm trying to return from the pixel shader:
[source lang="cpp"]return float4(floatValue, asfloat(firstUintValue), asfloat(secondUintValue), asfloat(thirdUintValue));[/source]
I do this because I don't want to convert the uint values to float; I want to preserve their exact bit patterns.
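
One thing I've been wondering is whether the texture format itself matters here. For example (just an untested idea; width, height, and device are placeholders), the resource could be created typeless and then viewed as FLOAT for the pixel-shader render target and as UINT for the compute-shader input, so both passes interpret the same raw 32-bit patterns:

[source lang="cpp"]
#include <d3d11.h>

// Untested idea: a typeless texture, with a FLOAT render-target view for the
// pixel shader and a UINT shader-resource view for the compute shader.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width            = width;    // placeholder
texDesc.Height           = height;   // placeholder
texDesc.MipLevels        = 1;
texDesc.ArraySize        = 1;
texDesc.Format           = DXGI_FORMAT_R32G32B32A32_TYPELESS;
texDesc.SampleDesc.Count = 1;
texDesc.Usage            = D3D11_USAGE_DEFAULT;
texDesc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* texture = nullptr;
device->CreateTexture2D(&texDesc, nullptr, &texture);

D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
rtvDesc.Format        = DXGI_FORMAT_R32G32B32A32_FLOAT; // pixel shader still writes a float4
rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
ID3D11RenderTargetView* rtv = nullptr;
device->CreateRenderTargetView(texture, &rtvDesc, &rtv);

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format              = DXGI_FORMAT_R32G32B32A32_UINT; // compute shader reads uint4
srvDesc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView* srv = nullptr;
device->CreateShaderResourceView(texture, &srvDesc, &srv);
[/source]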

However, when I read from the texture in my compute shader and convert these values back to uint using the asuint(texel.Y) function, they do not seem to be the same values I attempted to store in the first place. In fact, most of the time I seem to get zero values out of this.

I know that I have supplied my compute shader with the texture as a shader resource properly, because I am able to retrieve the X component of the texels, which you'll notice above was a regular float (between 0.0 and 1.0).

Does the pixel shader require its output to be 0.0 - 1.0 floats, and does it make automatic adjustments otherwise?

Thank you for your time and assistance.

Debugging DirectCompute Shader with NVIDIA NSight

09 November 2012 - 04:53 PM

I would like to debug my DirectCompute shader. NVIDIA's NSight website claims that it supports DirectCompute for GPGPU debugging, but their documentation only shows how to debug CUDA C++ code. I have successfully used NSight to do graphics debugging and it works great - I run NSight on my laptop, which copies and launches my application on my desktop PC, and allows me to debug remotely. I can't seem to figure out how to get compute shader debugging to work, though. I tried putting a breakpoint inside the compute shader function of my .fx file, but it doesn't trigger when my C++ application calls Dispatch for that shader. Could it have something to do with the fact that my application compiles all my shaders at runtime?
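
For reference, the kind of runtime compile call I mean looks roughly like this (a simplified sketch; the file name and entry point are placeholders, and I'm not sure whether the debug flags below are what NSight needs to map breakpoints back to the HLSL source):

[source lang="cpp"]
#include <d3dcompiler.h>

// Sketch of a runtime compile with debug info enabled, in case NSight needs
// it to resolve breakpoints in the HLSL source.
UINT flags = D3DCOMPILE_ENABLE_STRICTNESS;
#if defined(_DEBUG)
flags |= D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION;
#endif

ID3DBlob* shaderBlob = nullptr;
ID3DBlob* errorBlob  = nullptr;
HRESULT hr = D3DCompileFromFile(L"MyShaders.fx",   // placeholder path
                                nullptr, nullptr,
                                "CSMain",          // placeholder entry point
                                "cs_5_0",
                                flags, 0,
                                &shaderBlob, &errorBlob);
[/source]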

Has anyone had any success debugging their DirectCompute HLSL code using NVIDIA NSight? If so, any guidance would be much appreciated!

Thanks,
Tim
