Everything posted by freka586

  1. I am currently investigating a scenario where content is drawn using an incorrect/unexpected texture and/or texture content. The problem is intermittent, which makes troubleshooting much more difficult. Currently the most likely candidate seems to be that SetTexture for some reason was not executed, leaving the previously used texture set. The exact same operation succeeds a moment later, so the actual textures and their content are fine. I have high-level try/catch blocks, so if SlimDX had thrown an exception (via CheckHResult) I would have seen it. So, are there other cases where a SetTexture call can fail to complete, without it being serious enough for SlimDX to throw an exception? What about other result codes, such as WasStillDrawing: would they prevent completion, and if so, would they result in an exception being thrown? Thanks in advance, Fredrik [Edited by - freka586 on May 31, 2010 6:19:48 AM]
  2. OK, and all results except Success are counted as failures? Can you imagine any scenario where the underlying DirectX calls may return Success even though something has gone wrong? Running out of process address space (OutOfMemory) is for instance not uncommon, so if LockRectangle results in a memory allocation (despite being in the Managed pool), perhaps that could be a failure case? Sorry for being so vague, but troubleshooting based on two separate occurrences of "should never happen" is hard! Issues on a lower level are of course another possibility, as an Nvidia Quadro is a common factor for both cases. But I would still like to rule out aspects of my own code before going down that road...
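For what it's worth, the success/failure question comes down to the HRESULT severity bit, not an exact comparison against Success: FAILED() means bit 31 is set, so any non-negative value counts as success. Here is a small stand-alone Python sketch (not SlimDX code; the D3DERR constants are the documented values from d3d9.h, worth verifying against your SDK headers):

```python
# Direct3D 9 HRESULTs, as documented in d3d9.h.
D3DERR_INVALIDCALL     = 0x8876086C  # MAKE_D3DHRESULT(2156)
D3DERR_WASSTILLDRAWING = 0x8876021C  # MAKE_D3DHRESULT(540)
S_OK = 0x00000000

def failed(hr):
    """Mirror the Win32 FAILED() macro: severity bit (bit 31) set."""
    return bool(hr & 0x80000000)

def decode(hr):
    """Split an HRESULT into (severity, facility, code).

    13 bits are taken for the facility here because the D3D facility
    value 0x876 (_FACD3D) overflows the standard 11-bit field.
    """
    return (hr >> 31) & 1, (hr >> 16) & 0x1FFF, hr & 0xFFFF
```

Note that a negative signed value such as -2005530516, reinterpreted as unsigned 32-bit, is exactly D3DERR_INVALIDCALL, so both failure codes above would be caught by a FAILED()-style check.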
  3. I'm working on a .NET application that requires accelerated graphics, currently DirectX 9.0c. The software is quite graphics-intensive and must, in addition, be launchable from a CD or via ClickOnce without the user needing administrator permissions. I currently use SlimDX, but the users are getting rather annoyed by having to install the DirectX redistributable, especially since this requires elevated permissions. It is rather hard to explain to them why the version of DirectX already bundled with their OS is not sufficient. After all, DirectX 9.0c has been around since 2004, and I'm not using any fancy new features. The ability to deliver an application that "just works" in Vista or Windows 7 (or even XP SP3), without any particular additional prerequisites, would be a huge advantage. Therefore: is there any way I can adjust/compile SlimDX such that it relies only on the libraries provided in a standard Windows Vista/Win7 installation? That is, without requiring a particular DirectX redistributable to be installed? Thanks in advance!
  4. Thanks for your reply! In my case, Effects would probably be the remaining problem. I use only a small portion of it, but I have no real sense of how much low-level coding would be needed to work around it. To make sure I understand correctly: is no version of D3DX shipped with the OS? Or is it that particular/different versions are shipped depending on the OS, and that SlimDX requires one specific version?
  5. UpdateSurface on VolumeTexture?

    Come to think of it, this most likely has nothing to do with SlimDX. If I knew how to do this in plain DirectX, the same approach would most likely work just fine in SlimDX as well. Updated the title to reflect that! I can figure out the syntactic differences if someone has a clue how to solve this problem in DirectX? [Edited by - freka586 on March 8, 2010 4:31:55 AM]
  6. First a small remark - I plan to explore this myself as soon as I get to a computer with a proper development environment. But I thought it best to check with the community as well, since that will be some time away for me. Is it possible to use the UpdateSurface approach on VolumeTextures as well? Or on Volume, which would be the 3D equivalent of Surface, right? Basically, what I would like to do is fill the content of a Pool.Default VolumeTexture with the content of a Pool.SystemMemory VolumeTexture as efficiently as possible.
a) Is there an equivalent to UpdateSurface for this, or some other preferred approach?
b) Any flags/options that might be of interest for reaching maximum efficiency, as the amount of data for volume textures is typically fairly large?
c) Are partial updates also OK, indicated by a Box region? What would the performance impact be; am I perhaps even better off flushing the entire texture to reduce overhead?
Thanks in advance, Fredrik [Edited by - freka586 on March 8, 2010 4:29:09 AM]
  7. Hi, I just wanted to check if there are any known issues with running PIX on SlimDX programs under 64-bit Vista. My basic attempt consisted of launching the 64-bit PIX and pointing it to e.g. the MiniTri sample program I had just built. This always seems to result in "Target Program Unexpected Exit". Now, I have never used PIX before, so I may be missing some vital parts here; sorry if that is the case... If I do not want to perform any fancy kind of profiling, would I still need to modify the MiniTri project? My initial assumption has been that things should run out of the box, but can PIX be *hindered* by certain compile flags? Thanks in advance, Fredrik
  8. Yeah, I figured that this is some case of user error! It turns out that it works fine when running against x86 builds of MiniTri, but fails ("Failure creating process") when trying an explicit 64-bit build, and with the previously mentioned error when using AnyCPU. I can probably manage with the 32-bit compiled version for now, but it would be very interesting to hear if anyone has successfully used "full 64-bit" (OS, PIX, code), and the steps/circumstances needed to get there... I noticed some previous posts on issues with PIX on 64-bit, but I am using the March 2009 release, which solved those problems.
  9. When working under tight time pressure (as I am currently) it is always tempting to look for "the quick fix" instead of spending the time to learn a new tool, despite the fact that the latter *without a doubt* will save plenty of time in the long run. This time I will take the time needed to get familiar with PIX; thanks for the push!
  10. I am struggling a bit with getting a render-target approach to work. Here's what I am doing:
1) Create a render target texture with the same size and format as the backbuffer.
2) Render the main scene into the render target.
3) Render a textured quad covering the entire view, using the render target texture. A pixel shader samples the texture and (optionally) performs some adjustments.
However, as soon as I try to sample the texture in my shader when assigning the output color, the result is full black. My observations so far:
a) If I remove the shader parts, the render target texture is mapped correctly onto my quad. So both the main scene render and the quad seem valid.
b) If I only set a fixed output color in my shader, without sampling the texture, that also gets applied as expected. So at least the shader is executed as it should be.
So, given this vague description, any ideas on what could be going wrong? Are there any common mistakes I should double-check? I am well aware that code samples make a world of difference. However, getting some fairly clean code samples requires a fair bit of effort at this point, so I'd like to put that off at least for the initial troubleshooting...
  11. It turns out that I was a bit too quick in putting the blame on BeginScene/EndScene. I had made multiple changes, including some testing on the shader side. Things work well when using PS2.0 for my shader, but not when using PS3.0. And I must go for the latter, since the actual shader (not this test program) requires many texture lookups. When playing around a bit it looks as if the texture coordinates are not correctly set. I have seen before that ps_3_0 sometimes requires vs_3_0 input, even if it is just a passthrough shader, but trying this also fails. Some comments on the code below (a modified SlimDX MiniTri sample):
a) Changing to PS2 and outputting the texcoords shows nice colors as expected.
b) Doing the same with PS3 results in no colors at all.
c) The same when using PS3 and VS3. But there may of course be problems with how my vertex part is set up!
d) I am extremely aware that resource handling is not very sane in this sample, but I wanted to contain as much of the sample as possible in a single method.

// --------------
static void Application_Idle(object sender, EventArgs e)
{
    while (AppStillIdle)
    {
        Device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0);
        Device.BeginScene();

        Surface previousRT = Device.GetRenderTarget(0);
        Texture renderTargetTexture = new Texture(Device,
            previousRT.Description.Width, previousRT.Description.Height,
            1, Usage.RenderTarget, previousRT.Description.Format, Pool.Default);

        using (Surface renderTarget = renderTargetTexture.GetSurfaceLevel(0))
        {
            Device.SetRenderTarget(0, renderTarget);

            // Stub for scene rendering, drawing a triangle with pretty colors.
            Device.SetStreamSource(0, Vertices, 0, 20);
            Device.VertexFormat = VertexFormat.PositionRhw | VertexFormat.Diffuse;
            Device.DrawPrimitives(PrimitiveType.TriangleList, 0, 1);
        }
        Device.SetRenderTarget(0, previousRT);

        string errors = null;
        string effectPath = @"C:\ImageShader.fx";
        Effect effect = Effect.FromFile(Device, effectPath, null, null, null,
            ShaderFlags.None, null, out errors);
        effect.Technique = "BasicTechnique";

        EffectHandle colorHandle = effect.GetParameter(null, "colorTexture");
        effect.SetTexture(colorHandle, renderTargetTexture);

        Matrix wvp = Device.GetTransform(TransformState.World) *
                     Device.GetTransform(TransformState.View) *
                     Device.GetTransform(TransformState.Projection);
        effect.SetValue("worldViewProj", wvp);

        Viewport viewport = Device.Viewport;
        Device.SetTransform(TransformState.Projection, Matrix.OrthoOffCenterRH(
            viewport.X, viewport.X + viewport.Width - 1,
            viewport.Y, viewport.Y + viewport.Height - 1, 0, 1));

        effect.Begin(FX.None);
        effect.BeginPass(0);

        VertexBuffer quadVertices = new VertexBuffer(Device,
            4 * PositionTextured.SizeBytes, Usage.None,
            PositionTextured.Format, Pool.Managed);
        CreateVertices(quadVertices, viewport.Width, viewport.Height);

        Device.SetRenderState(RenderState.CullMode, Cull.None);
        Device.VertexFormat = PositionTextured.Format;
        Device.SetStreamSource(0, quadVertices, 0, PositionTextured.SizeBytes);
        Device.DrawPrimitives(PrimitiveType.TriangleStrip, 0, 2);

        effect.EndPass();
        effect.End();
        effect.Dispose();
        effect = null;

        renderTargetTexture.Dispose();
        renderTargetTexture = null;
        quadVertices.Dispose();
        quadVertices = null;

        Device.EndScene();
        Device.Present();
    }
}

// ------- ImageShader.fx ----------
Texture colorTexture;
uniform float4x4 worldViewProj : WORLDVIEWPROJ;

sampler colorSampler = sampler_state
{
    texture = <colorTexture>;
    AddressU = Clamp;
    AddressV = Clamp;
    AddressW = Clamp;
    MinFilter = Point;
    MagFilter = Point;
};

// Dummy vertex shader to do the world/view/projection transformation.
// This is normally not needed, but sometimes a ps_3_0 shader
// won't run unless there's a vs_3_0 shader.
void DummyVS(in float4 inPos : POSITION0, out float4 outPos : POSITION0)
{
    outPos = mul(inPos, worldViewProj);
}

void SimpleShader(in float3 textureCoords : TEXCOORD0, out float4 outColor : COLOR0)
{
    float4 sample = tex2D(colorSampler, textureCoords);
    outColor = 1;
    //outColor.rgb = textureCoords.xyz;
    outColor.rgb = sample.xyz;
}

technique BasicTechnique
{
    pass P0
    {
        Texture[0] = <colorTexture>;
        //VertexShader = compile vs_3_0 DummyVS();
        PixelShader = compile ps_3_0 SimpleShader();
    }
}
  12. When messing around a bit, it looks as if at least one source of error was illegal state changes within the same BeginScene/EndScene block. The following seems to work:
Set RT to RT texture
BeginScene
RenderMainScene
EndScene
Set RT to back buffer
BeginScene
RenderAndShade
EndScene
But not if everything is done within a single BeginScene/EndScene block (omitting the inner ones). Does this make sense?
  13. Sorry, I didn't quite get that last advice? What I have already tried is simply sampling the texture and setting that as the output (without additional adjustments), and that did not work. I will try to look into PIX, as I have not had the pleasure of using it before!
  14. I am struggling with some image resampling problems that I hope the community can help me shed some light on! I have a basic grayscale image, let's say of size 2048x2048. This is mapped/displayed on a region that, in screen space, is 1050x1050 pixels. The mapping is straightforward and planar, i.e. no tilting planes or complex polygons or anything. Due to high-frequency components in the image I am now suffering from *severe* artefacts. The interpolation that performs the mapping between 2048x2048 and 1050x1050 simply does not cope. When "zooming" in a bit I soon reach a point where a downsampled version, 1024x1024, fits perfectly, and there the texture mapping does not introduce additional artefacts. My question at this point is what options I have for handling the first scenario (maximum difference between source and destination size)?
a) I tried swapping the Linear min/mag filter for other options, but without visible results.
b) I have also explored multisampling (NonMaskable, TwoSample, FourSample), but without success.
Should either or both of these have solved my problem? I.e., could there be errors in my testing, or is my problem not of a nature that those tools would help with?
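For intuition (outside DirectX entirely), here is a small stand-alone Python sketch of why a near-2:1 non-integer downsample of high-frequency content aliases, and why prefiltering (which is essentially what a mipmap level gives you) removes it. The signal and sizes are made up for illustration:

```python
# A worst-case source: a 1-D signal alternating every sample (highest frequency).
SRC, DST = 2048, 1050
src = [1 if n % 2 == 0 else -1 for n in range(SRC)]

def resample_nearest(signal, dst_len):
    """Point-sample the source at a non-integer step, with no prefilter."""
    step = len(signal) / dst_len
    return [signal[min(int(round(i * step)), len(signal) - 1)]
            for i in range(dst_len)]

# Naive point sampling: the alternating pattern folds down into long,
# slowly varying runs -- a low-frequency alias that was never in the source.
naive = resample_nearest(src, DST)

# Prefilter with a 2-tap box first (crudely, what a half-size mipmap does):
# the unrepresentable frequency is removed instead of folded down.
prefiltered = [(src[n] + src[n + 1]) / 2 for n in range(SRC - 1)]
clean = resample_nearest(prefiltered, DST)

# The naive output changes sign far less often than the source did:
# its energy has moved to spurious low frequencies.
sign_changes = sum(1 for a, b in zip(naive, naive[1:]) if a != b)
```

The same reasoning suggests why min/mag filter choices and multisampling did not help here: neither band-limits the texture before it is sampled below its Nyquist rate.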
  15. Multisample questions

    Here's an example of a sort of worst-case scenario. Different types of images show different artefacts, I guess depending on the frequency content of the source image. Here we get folding distortion/aliasing in the form of vertical lines. For some images there are square patterns of varying scales. Please click "twice" to look at the image in full resolution, as downsampled representations would add problems of their own. The image shows, from left to right:
a) Large scale difference (sourceWidth/destinationWidth close to 2, but slightly larger). No edge enhancement.
b) Exactly the same as [a] but with a strong edge enhancement filter shader applied.
c) The same as but with the zoom changed a small bit, such that a mipmap of half the size matches perfectly.
As for the anisotropic settings, I tinkered with them a bit, but without luck. My knowledge in that area is slim, but I thought this would only help when the polygon suffers from "perspective" problems, i.e. is not aligned with the view?
  16. Come to think of it, precomputing "mipmaps" on my own with tighter increments than full factors of 2 might actually be possible. The actual frequency problem would still be the same, but I might cut out the very worst cases by introducing more intermediate reconstructions. I'll probably look into this option as well!
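As a sketch of that idea (plain Python; the sizes and the 1.25 step are illustrative assumptions, not anything from DirectX): precompute a chain of levels shrinking by a factor r < 2, then pick the smallest precomputed level still at least as large as the destination. That bounds the worst-case source-to-destination ratio by roughly r instead of by 2:

```python
def build_levels(full_size, step=1.25, min_size=64):
    """Precompute a chain of sizes shrinking by `step` (< 2) per level."""
    levels, size = [], full_size
    while size >= min_size:
        levels.append(size)
        size = int(size / step)
    return levels

def pick_level(levels, dst_size):
    """Smallest precomputed level that is still >= the destination size."""
    candidates = [s for s in levels if s >= dst_size]
    return min(candidates) if candidates else max(levels)

levels = build_levels(2048)        # [2048, 1638, 1310, 1048, ...]
chosen = pick_level(levels, 1050)  # worst-case ratio now ~1.25, not ~2
```

The trade-off is memory: each intermediate level costs storage, so the step size is a dial between worst-case aliasing and texture budget.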
  17. Actually, the downsampling is only the first half of my problem... but I wanted to keep things as clear as possible for the initial discussion. When doing the downsampling (and unfortunately I can neither use integer ratios nor use upsampling when slightly "over" the limit), the quality is only rarely problematic. The bigger problem is that in some scenarios I must also apply an edge enhancement shader. As you can imagine, that does not exactly lessen the HF content. The result is some serious artefacts, and it is mainly due to this scenario that I was exploring my options. I had a hunch that multisampling was more relevant when texturing polygons more complex than my simple static square. I might still look more deeply into mipmapping, but at first glance it did not seem to offer any help. When the ratio between texture and destination size is an integer factor the results are nice; it is the points just before this that are the problem. I think my best bet at this point is probably to opt for a more sophisticated edge enhancement filter, perhaps containing an adjustable smoothing component that would allow me to handle the different scenarios more gracefully. Thanks for the input so far, and please do not hesitate to suggest any ideas whatsoever. I am still at an experimental phase, so most options are still open!
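One way to get that adjustable smoothing component is a parameterized unsharp mask, out = src + amount * (src - blur(src)): amount = 0 degenerates to a pass-through, so the enhancement strength can be backed off per scenario. A 1-D Python sketch (the box blur and parameter names are my own illustration, not from any particular filter library):

```python
def box_blur(signal, radius=1):
    """Simple box blur with edge clamping."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)]
                  for j in range(i - radius, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def unsharp(signal, amount):
    """Edge enhancement: add back `amount` times the high-pass residual."""
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

flat = [5.0] * 8               # flat regions are left untouched
edge = [0.0] * 4 + [1.0] * 4   # a step edge gets over/undershoot
```

The overshoot and undershoot around the step edge are exactly the kind of HF content that then aliases in the downsample, which is why a tunable amount helps near the bad ratios.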
  18. I am using SlimDX for a 2D image viewer/renderer. Internally there is only one device, but multiple swap chains associated with the image viewer's multiple image controls. However, when the application has been idle for a long time (no quantitative measurement, but overnight seems to be sufficient), something goes seriously wrong. Basically any action executed against the device from here on fails. An example:
Presenting rendered image caused exception: SlimDX.Direct3D9.Direct3D9Exception: D3DERR_INVALIDCALL: Invalid call (-2005530516)
at SlimDX.Result.Throw[T](Object dataKey, Object dataValue)
at SlimDX.Result.Record[T](Int32 hr, Boolean failed, Object dataKey, Object dataValue)
at SlimDX.Result.Record[T](Int32 hr, Object dataKey, Object dataValue)
at SlimDX.Direct3D9.Device.Reset(PresentParameters presentParameters)
However, within the same application there is a different context where a new device is created on each execution, and this one seems to work even after the extended period of waiting. So my main questions are:
1) Can I prevent an existing device/state from becoming invalid after a long period of inactivity? Perhaps some form of "sponsorship" that needs to be renewed, or an explicitly specified lifespan?
2) If not, can I detect that this situation has occurred, to allow me to distinguish this particular error scenario from my normal error handling logic?
Thanks in advance, Fredrik
  19. [SlimDX] Limited object lifetime?

    Unfortunately this was on a release build. I'll try to reproduce things in a controlled debug environment as soon as possible; I just wanted to see if this was a known issue with a known fix. Regards, Fredrik
  20. That's really nice! I had no idea that this kind of construction can be handled at compile time, and have been avoiding it because of branching penalties until now. So far I have been focusing mainly on single-pass filtering, to avoid intermediate render-target texture(s). Are there any neat shortcuts here as well that could be of use? My end goal in this area is an edge enhancement filter. The main challenges are performance (it is re-applied for each frame, as the result cannot be reused) and texture resolution (typically 2k x 2k to 4k x 4k). There are also memory constraints due to large textures being kept for a long time. I have considered a multipass approach, such as separate steps of Laplacian of Gaussian and unsharp mask, but to avoid render-target textures I have for now used a composite filter kernel of the LoG filter.
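For reference, a composite LoG kernel of that kind can be precomputed on the CPU and uploaded to the shader as the weight array. A Python sketch (the sigma and size are arbitrary illustration values; the kernel is mean-subtracted so that flat image regions map to zero, as one usually wants for an edge detector):

```python
import math

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian kernel, normalized to zero sum.

    Uses the standard LoG shape (x^2 + y^2 - 2*sigma^2) / sigma^4
    * exp(-(x^2 + y^2) / (2*sigma^2)), up to a constant factor.
    """
    r = size // 2
    k = [[(x * x + y * y - 2 * sigma * sigma) / sigma ** 4
          * math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-r, r + 1)]
         for y in range(-r, r + 1)]
    # Subtract the mean so the weights sum to (numerically) zero:
    # convolving a constant region then yields zero response.
    mean = sum(map(sum, k)) / size ** 2
    return [[v - mean for v in row] for row in k]

kernel = log_kernel(7, 1.0)  # 7x7, sigma = 1.0: negative center, positive ring
```

Since only the weights change per configuration, the same generic shader loop can consume a 3x3, 5x5 or 7x7 kernel produced this way.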
  21. I am testing around a bit with some basic image filtering in HLSL. However, I have yet to find a way to pass an array to the shader program without first specifying the number of elements. This might be a strict requirement in order for the HLSL compiler to do its work properly, but I'd still like to check with the people here, as it would be nice to have the shader completely generic when it comes to filter size...
OK -----
float filterWeight3x3[9];
float filterWeight5x5[25];
NOT OK -----
float filterWeights[];
  22. Hey, I'm open to all kinds of ideas! Your Effect approach sounds interesting, care to elaborate a bit more? Auto-generation sounds like it might also prevent the code duplication I am trying to avoid... At least as an initial approach, my filter kernels would all be square (3x3, 5x5, 7x7, perhaps larger) and executed in a single pass. Any additional comments, suggestions, ideas or experiences on the topic would be greatly appreciated! Thanks in advance, Fredrik
  23. Um, it sounds so simple now that you say it... Will definitely try that one! And I guess that for scheduling etc. it should make no difference if the array is declared with 49 elements but only 9 are provided and accessed?
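The idea can be sketched outside HLSL (Python standing in for the shader logic; the names are made up): declare the array at the maximum size, fill only the active prefix with real weights and pad the rest with zeros, and a convolution over just the active count matches what a dedicated small-kernel shader would compute:

```python
MAX_WEIGHTS = 49  # declared array size in the shader (7x7 worst case)

def pack_weights(weights):
    """Pad the active kernel weights into the fixed-size constant array."""
    assert len(weights) <= MAX_WEIGHTS
    return weights + [0.0] * (MAX_WEIGHTS - len(weights)), len(weights)

def apply_filter(samples, packed, active_count):
    """What the shader loop does: use only the first active_count weights."""
    return sum(s * w for s, w in zip(samples[:active_count],
                                     packed[:active_count]))

# A 3x3 sharpen kernel, flattened row-major, purely for illustration.
kernel3x3 = [0.0, -1.0, 0.0, -1.0, 5.0, -1.0, 0.0, -1.0, 0.0]
packed, count = pack_weights(kernel3x3)
samples = [float(i) for i in range(MAX_WEIGHTS)]
```

Because the padding weights are zero, even a loop over all 49 slots would produce the same sum; passing the active count just avoids the wasted texture fetches.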
  24. I create big textures. Seriously big. However, regardless of whether I use the Default or Managed memory pool, I cannot seem to avoid a corresponding allocation in my process. Can this be avoided at all? A temporary allocation while filling the texture would of course be a different matter; what I am looking for is a way to avoid the constant "overhead" corresponding to the sum of all textures currently in use. In my case the most significant ones will be VolumeTextures. When trying this out, the actual texture creation does not make a significant change in the virtual memory size of my process. However, as soon as I try to fill the texture, the chunk of memory is allocated and not released. My hunch is that SlimDX has no part in this, and that it is either something that cannot be done or simply a matter of using the right approach. Hence the [SlimDX?] title.
  25. In the case of VolumeTextures that would be GetVolumeLevel followed by Volume.FromMemory. Thanks for the input, I'll try this once I get a chance!