


About LTKeene

  1. LTKeene

    MDX runtime?

    I see. Would the June 2005 redistributable work for something built against Summer 2004? Both are MDX 9.0c, apparently. Finally, is anything else needed by the end user if the app makes use of pixel shaders (v3)?
  2. LTKeene

    MDX runtime?

    Hello. For apps that were created with Managed DirectX (Summer 2004) and will be running on Win7, is it still necessary to install the MDX runtime? I was hoping it would be "baked" into Win7. Anyone? -L
  3. Hello, I'm experimenting with DirectX9 and for the moment I'm loading my pixel shader programs at runtime by doing the following: Effect effect1 = Effect.FromFile(device, path + "MyEffectFile.fx", null, ShaderFlags.None, null, out errors); It's my understanding that the shader is compiled at runtime. Is there a way to precompile the shader file and "lump it in" with the rest of the executable so that the shader text file itself doesn't have to be distributed? -L
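One way to avoid shipping the .fx source is to compile it offline and load the compiled bytes at startup. A minimal sketch, assuming the effect is compiled with the DirectX SDK's fxc.exe (e.g. `fxc /T fx_2_0 /Fo MyEffectFile.fxo MyEffectFile.fx`) and embedded in the assembly as a resource; the resource name and the `Effect.FromStream` overload (mirrored from the `FromFile` call above) are assumptions, not verified against the Summer 2004 SDK:

```csharp
using System.IO;
using System.Reflection;
using Microsoft.DirectX.Direct3D;

// Load a precompiled effect embedded in the executable as
// "MyApp.MyEffectFile.fxo" (hypothetical resource name):
Stream fxStream = Assembly.GetExecutingAssembly()
    .GetManifestResourceStream("MyApp.MyEffectFile.fxo");
string errors;
Effect effect1 = Effect.FromStream(device, fxStream, null,
    ShaderFlags.None, null, out errors);
```

Embedding the compiled .fxo as a resource keeps the shader out of the distributed files entirely; the shader text never leaves the build machine.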
  4. LTKeene

    Baffling performance issue

    Does anyone know of a benchmark or diagnostic tool I can download and run that would provide objective numbers?
  5. LTKeene

    Baffling performance issue

    Music seems to play fine, and according to Windows my driver is the latest (although it's dated Jan.). I've tried running the sound loop in a worker thread to guarantee it doesn't block, but I still get intermittent stuttering with larger texture kernels. Dang strange! Is it possible that the instruction count for the larger kernels is exceeding the maximum, and DirectX automatically falls back to a software implementation of the pixel shader? -L
  6. LTKeene

    Baffling performance issue

    MJP, thanks very much for the reply! I should have mentioned that all the DirectX device "stuff" is happening on another thread. In other words:
    1) Instantiate the DirectX device on the UI thread, initialize, set up vertices, etc.
    2) When necessary, launch a worker thread which loops, loads the buffer to a texture, calls device.Present(), etc.
    3) At the same time, the UI thread intermittently processes/loads the looping audio buffer.
    So, I believe if the graphics driver were to block a thread it would be blocking the worker thread (the one calling Present()), no? In that case my UI thread is free to carry on the sine-wave processing. I have 4 full cores to play with, so I'm dismayed to see the audio get choppy. I can live with a 7x7 kernel but not discontinuous audio. The NAudio library is using the WaveOut API... is it possible the audio is being channeled through the graphics card as well, so that when the card starts to bog down it affects the audio output? -L
  7. Hello, Let me preface this by saying I am NOT a very good DirectX programmer by any means. I know only enough to accomplish what I was attempting. Briefly, my application is reading frame buffers from a machine vision camera into a texture and processing it on the GPU via a pixel shader (v3) using Managed DirectX (v9... obsolete, I know). The pixel shader program processes the texture by analyzing neighborhoods of pixels in sizes of 5x5, 7x7 or 9x9 centered on the output pixel. The neighborhood is processed and the color of the output pixel is adjusted according to the result of the processing. More or less a convolution (in fact, not even that, as I'm not doing any multiplications, just additions over the kernel window). At the same time, the application uses an open audio library called NAudio to play test tones (sine waves) in my app. The sine waves are simultaneously and intermittently computed on the UI thread and loaded into a circular buffer for audio output while the graphics processing is going on. This has been working very smoothly on my desktop machine (GeForce GTS 250 and Core i5): I get real-time frame rates, smooth updating and seamless audio output.
    Yesterday I tried this same application on a Core i7 laptop with an ATI Mobility Radeon HD 5470 and was surprised to see the graphics and audio performance deteriorate hugely. With the 5x5 neighborhood pixel shader the frame rates were fine and audio was good, but at 9x9 it slowed down to a fraction of the GeForce card in the desktop... about 5 frames a second, no exaggeration. In addition to this I got some pretty bad stuttering in the acoustic sine wave output. When I tax the GPU with larger convolution kernels the audio stuttering becomes really bad. The only time the audio is seamless and the frame rates are good is with the 5x5 kernel.
The weird thing is that the convolution processing is occurring on the GPU (supposedly) whereas the audio sine wave generation is on the UI thread, so the larger kernel in the pixel shader shouldn't affect the audio processing. I've tried moving the audio processing to a secondary thread but that made no difference. I've also noticed that interface responsiveness slows down. It's almost as if the pixel shader is running on the CPU and not the GPU. Any ideas what may be going on here? This has me stumped. The device is created with HardwareVertexProcessing and it does support pixel shader 3, but has a max instruction slots value of 512. I've noticed that I am unable to instantiate the device on the laptop with the "PureDevice" argument as this causes an exception, but then again I wasn't using that argument for the desktop version either and that one screams. Any thoughts anyone? Any help much appreciated! -L
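    One concrete thing worth checking is the laptop GPU's actual shader caps before choosing a kernel size. A sketch of querying them through MDX; the property names here mirror the native D3DCAPS9 fields, and the exact MDX spellings are assumptions worth checking against the SDK docs. Note that if a compiled shader exceeded the instruction-slot limit, shader creation would fail outright rather than silently falling back to the CPU.

```csharp
using System;
using Microsoft.DirectX.Direct3D;

// Query the default adapter's pixel-shader limits (MDX property
// names assumed to match the native D3DCAPS9 fields):
Caps caps = Manager.GetDeviceCaps(0, DeviceType.Hardware);
Console.WriteLine("PS version: " + caps.PixelShaderVersion);
Console.WriteLine("Max ps_3_0 instruction slots: "
    + caps.MaxPixelShader30InstructionSlots); // reportedly 512 on this laptop
```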
  8. First of all, thanks to everyone who's taking the time to read/reply, it's a big help! I need to be clearer about what it is I'm doing exactly:
    1) I load the contents of a camera buffer into a byte[,] and do some processing on it on the CPU, not the GPU. The results of this processing are 32-bit floats and are written to an R32F texture which is set aside for step 3.
    2) Next, I again load the contents of the camera's buffer (it's a little video camera that's constantly updating its internal buffer at a low frame rate of approx. 15 fps) into a byte[,] and transfer that into a texture. This texture is mapped onto my simple quad in transformed coordinates. At this point, when rendered to the screen it looks uncorrected for gamma, since the rendered image looks darker than the image portrayed by the byte[,] values. Therefore, I need to enable gamma correction in the device's render state to correct for this. However...
    3) What I didn't mention (and probably should have) was that after doing the above, I'm sequentially taking images from the camera's buffer, loading them into a texture and running my pixel shader, which taps it. The pixel shader acquires the values via the sampler (1:1 mapping, texture size = render size) and does a bunch of calculations. Based on the result of these calculations and a comparison with the results in step 1 that were written to the float texture, the output color is chosen.
    So, to summarize... I'm comparing results obtained by operating on sRGB data with those obtained by operating on texture data fed to the pixel shader. The results just aren't what I'm used to when doing everything on the CPU, and with the help of people here it sure looks like I'm comparing apples with oranges (gamma-corrected vs. not gamma-corrected pixel data). What I have to do is make sure the GPU achieves the same result as the CPU when operating on these image frames.
As a final clarification, let's say I were to load a camera frame into a byte[,] array and perform some mathematical operation on the values in the array. This operation is performed on the CPU and I get some result. Then, I take the same byte[,] but this time I write the values to a texture using the approach in my original post and pass it to the pixel shader. The pixel shader then does the exact same operation and generates a value at each pixel. If I were to somehow compare the results of the pixel shader to those from the CPU, would they be the same or would they be different (since the pixel shader is operating on un-gamma-corrected data)? If different, do I compensate by disabling gamma correction and enabling it on the back-buffer as Matias suggested? Sorry for the long post!
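    For a concrete sense of the mismatch: if the sampler hands the shader sRGB-encoded bytes that the CPU path treats as linear, the two results differ by roughly a power curve. A sketch using the common gamma-2.2 approximation (the exact sRGB transfer function also has a small linear toe segment, so this is an approximation, not the precise conversion):

```csharp
using System;

// Approximate sRGB <-> linear conversion using gamma 2.2:
static float SrgbToLinear(byte v)
{
    return (float)Math.Pow(v / 255.0, 2.2);
}

static byte LinearToSrgb(float lin)
{
    return (byte)Math.Round(Math.Pow(lin, 1.0 / 2.2) * 255.0);
}

// A mid-gray byte of 128 is ~0.50 as an sRGB fraction but only ~0.22
// in linear light, so CPU math on raw bytes and shader math on decoded
// texels will disagree unless both work in the same space.
```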
  9. I ran through the link regarding gamma correction...very interesting. I'm a little confused now though. In the code I posted above, is "checkImage" linear or sRGB? Judging by the examples I saw in the presentation, my original pixelBuffer[,] looked like it was gamma corrected, whereas my rendered texture looked like it wasn't gamma corrected (primary values were being pushed towards their extremes, black areas had lost most of the detail, white areas looked like they were saturating). I need to guarantee that the values I'm writing to the texture in my above program are indeed the exact same values that my pixel shader "sees" when it samples the texture. Does gamma correction do this?
  10. Thank you for the reply. This certainly sounds like it may be the problem. Where is the gamma correction applied (i.e. is it a device property that needs to be set)? How does one check to see if it is being applied? Thanks again for the help, everyone. -L
  11. Hello all. I'm doing something very simple: mapping a texture to a quad using MDX(9) with transformed coordinates and rendering it to the screen. The textured quad appears in the correct screen space and everything looks good... except the texture seems to render incorrectly. It's as if the texture image's histogram has been modified and it's drawn to the screen in a strangely sharpened/contrasty state. I'm loading the texture with values from a byte array (that corresponds initially to an 8-bit grayscale image), so I saved the array to a bitmap, opened it and looked at it. It appears exactly as I would expect, so I know the source array contains the correct values. The texture is properly positioned and everything; it's just the appearance of the texture itself that looks different by a not-insignificant degree. The byte array is exactly the same dimensions as the quad, so there should be a straight 1:1 mapping with no interpolation. Here's the simple source code I'm using... can anyone see what I've done wrong? Thanks in advance!

```csharp
// Values have been loaded into pixelBuffer[,]. Save to a bitmap just
// to make sure the values look correct:
Bitmap checkImage = new Bitmap(pixelBuffer.GetLength(1), pixelBuffer.GetLength(0),
    System.Drawing.Imaging.PixelFormat.Format32bppArgb);
for (int x = 0; x < pixelBuffer.GetLength(0); x++)
{
    for (int y = 0; y < pixelBuffer.GetLength(1); y++)
    {
        byte grayVal = pixelBuffer[x, y];
        checkImage.SetPixel(y, x, Color.FromArgb(grayVal, grayVal, grayVal));
    }
}
checkImage.Save("C:\\Users\\Me\\Check Image.bmp"); // Image looks good.

// Set up the device and parameters (this must happen before the
// texture below is created with it):
presentParams = new PresentParameters();
presentParams.Windowed = true;
presentParams.SwapEffect = SwapEffect.Discard;
device = new Device(0, DeviceType.Hardware, ParentForm,
    CreateFlags.HardwareVertexProcessing, presentParams);
device.RenderState.Lighting = false;
device.RenderState.AlphaBlendEnable = true;
device.RenderState.SourceBlend = Blend.SourceColor;
device.RenderState.DestinationBlend = Blend.InvSourceAlpha;

// Instantiate a Texture, write to it:
Bitmap texBitmap = new Bitmap(renderRectangle.Width, renderRectangle.Height,
    System.Drawing.Imaging.PixelFormat.Format32bppArgb);
backgroundTexture = new Texture(device, texBitmap, 0, Pool.Managed);
Surface texSurface = backgroundTexture.GetSurfaceLevel(0);
SurfaceDescription description = backgroundTexture.GetLevelDescription(0);
// Note: "PixelColor" is a struct containing 4 bytes (a,r,g,b):
PixelColor[] textureData = (PixelColor[])texSurface.LockRectangle(
    typeof(PixelColor), LockFlags.None, description.Width * description.Height);
int CurrentTexturePosition = 0;
for (int row = 0; row < renderRectangle.Height; row++)
{
    for (int column = 0; column < renderRectangle.Width; column++)
    {
        textureData[CurrentTexturePosition].a = 255;
        textureData[CurrentTexturePosition].r = pixelBuffer[row, column];
        textureData[CurrentTexturePosition].g = pixelBuffer[row, column];
        textureData[CurrentTexturePosition].b = pixelBuffer[row, column];
        CurrentTexturePosition++;
    }
}
// Unlock the surface:
texSurface.UnlockRectangle();

// Initialize the vertices used for the triangles that form the
// background quad, using transformed coordinates. "renderRectangle"
// is exactly the same dimensions as the "pixelBuffer" array:
float Left = (float)renderRectangle.Left;
float Top = (float)renderRectangle.Top;
float Right = (float)renderRectangle.Right;
float Bottom = (float)renderRectangle.Bottom;
// Use default (clockwise) winding order:
backgroundVertices[0].Position = new Vector4(Left, Top, 1.0f, 1.0f);
backgroundVertices[0].Tu = 0.0f; backgroundVertices[0].Tv = 0.0f;
backgroundVertices[1].Position = new Vector4(Right, Top, 1.0f, 1.0f);
backgroundVertices[1].Tu = 1.0f; backgroundVertices[1].Tv = 0.0f;
backgroundVertices[2].Position = new Vector4(Right, Bottom, 1.0f, 1.0f);
backgroundVertices[2].Tu = 1.0f; backgroundVertices[2].Tv = 1.0f;
backgroundVertices[3].Position = new Vector4(Left, Top, 1.0f, 1.0f);
backgroundVertices[3].Tu = 0.0f; backgroundVertices[3].Tv = 0.0f;
backgroundVertices[4].Position = new Vector4(Right, Bottom, 1.0f, 1.0f);
backgroundVertices[4].Tu = 1.0f; backgroundVertices[4].Tv = 1.0f;
backgroundVertices[5].Position = new Vector4(Left, Bottom, 1.0f, 1.0f);
backgroundVertices[5].Tu = 0.0f; backgroundVertices[5].Tv = 1.0f;
// Offset each vertex by -0.5 to correctly map texels to pixels:
for (int x = 0; x < 6; x++)
{
    backgroundVertices[x].X -= 0.5f;
    backgroundVertices[x].Y -= 0.5f;
}

// Clear the DirectX device and render:
device.Clear(ClearFlags.Target, Color.Black, 1.0f, 0);
device.BeginScene();
device.SetTexture(0, backgroundTexture);
device.VertexFormat = CustomVertex.TransformedTextured.Format;
device.DrawUserPrimitives(PrimitiveType.TriangleList, 2, backgroundVertices);
device.EndScene();
device.Present();
```
  12. Hello, right now my pixel shader code is looping over an input texture and acquiring values using the texture sampler and tex2D(...). I have 3 loops where I loop over exactly the same coordinates, each time calling tex2D() to acquire the texture values, which seems like a lot of redundant work to me. Is it better to store the acquired texture values in an array during the first loop, and in the second and third loops get the values from the local array rather than calling tex2D all over again for each coordinate? If so, what are the rules governing array declarations in a pixel shader? My loops are a little large, so I would need an array like: float[,] textureVals = new float[15, 15]; I'm using MDX and compiling for PS3.0. Thanks!
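    Local arrays are legal in ps_3_0, but the declaration syntax is C-style HLSL, not C#'s float[,]. A sketch of caching the taps once and reusing them, where the sampler name, kernel size and texel-size constant are placeholders (and the texture is assumed square):

```hlsl
sampler theTextureSampler : register(s0);
float texelSize;   // 1.0 / textureWidth, set from the app

float4 PS(float2 uv : TEXCOORD0) : COLOR0
{
    // Fetch the 15x15 neighborhood once...
    float vals[15][15];
    for (int i = 0; i < 15; i++)
        for (int j = 0; j < 15; j++)
            vals[i][j] = tex2D(theTextureSampler,
                uv + float2((i - 7) * texelSize, (j - 7) * texelSize)).r;

    // ...then later passes read vals[i][j] instead of calling
    // tex2D again:
    float sum = 0;
    for (int i2 = 0; i2 < 15; i2++)
        for (int j2 = 0; j2 < 15; j2++)
            sum += vals[i2][j2];
    return float4(sum, sum, sum, 1);
}
```

Whether this is actually faster depends on the compiler, which may unroll everything and keep the values in registers either way; comparing the compiled instruction counts of both versions is the reliable test.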
  13. I see, that's straightforward. I've written colors to a texture but never raw floats. Now that I'm looking at how to actually write the data to the texture, I see it's not as straightforward as I anticipated. Is this the correct approach?

```csharp
// Create texture/surface:
LUTtexture = new Texture(device, width, height, 1, 0, Format.R32F, Pool.Managed);
Surface LUTsurface = LUTtexture.GetSurfaceLevel(0);
SurfaceDescription desc = LUTtexture.GetLevelDescription(0);
// Lock the entire surface for writing; use LockFlags.Discard since the
// whole surface is to be overwritten with the LUT data. Note the cast
// must be to float[] (an array), not float:
float[] textureData = (float[])LUTsurface.LockRectangle(
    typeof(float), LockFlags.Discard, desc.Width * desc.Height);
textureData[0] = FloatValues[0];
textureData[1] = FloatValues[1];
// etc...
LUTsurface.UnlockRectangle();
```
  14. Yes, my card definitely supports branching. I have several conditional branches in my program. But for my experiment I commented everything out other than the two neighborhood loops, and just increased the size of the neighborhoods. I never seemed to hit the 4096 instructions limit, which is puzzling. If the GPU supports loops, does that mean the number of instructions within the loop body only count once and not each time they are executed?
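    That is essentially what happens: when the compiler emits real flow control, the loop body occupies its instruction slots once; when it unrolls the loop, the body is replicated per iteration. ps_3_0 HLSL lets you request either behavior explicitly; in this sketch, samp, offsets and sum are placeholder names:

```hlsl
[loop]     // keep dynamic flow control: body slots count once
for (int i = 0; i < 9; i++)
{
    sum += tex2D(samp, uv + offsets[i]).r;
}

[unroll]   // replicate the body 9 times in the compiled shader
for (int j = 0; j < 9; j++)
{
    sum += tex2D(samp, uv + offsets[j]).r;
}
```

Inspecting the compiled assembly (e.g. with fxc's disassembly output) shows which form the compiler actually chose.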
  15. Hello, first a big Thanks to everyone on this forum. It's been really helpful. Now, on to the question: I've been accessing texture colors from within my pixel shader by using: InputColor = tex2D(theTextureSampler, theTextureCoords); I now need to pass some 32-bit floats to the shader to use as a lookup table and it's my understanding that I should use an R32F texture that I have loaded with the necessary values. How does one go about accessing the values in an R32F texture from within the shader program? I've looked online and haven't been able to find the answer so far. Thanks in advance. -L
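    For what it's worth, sampling an R32F texture looks just like sampling a color texture; the float simply arrives in the red channel. A sketch, where the sampler name and register binding are placeholders:

```hlsl
sampler lutSampler : register(s1);

float LookUp(float2 coords)
{
    // Only .r carries data in an R32F texture; ignore g/b/a.
    return tex2D(lutSampler, coords).r;
}
```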