sirob

Members
  • Content count: 2066
  • Joined
  • Last visited

Community Reputation
  1181 Excellent

About sirob
  • Rank: Contributor

  1. The difference between the samples is to be expected, since each uses a different rendering technique. The DX9 sample uses transformed vertices (VertexFormat.PositionRhw), which means it specifies vertex positions directly in pixels. The D3D10 sample, on the other hand, uses a simple pass-through shader, so its vertices still go through part of the transformation pipeline and are given in clip space. That is why the two samples behave differently when the window is resized: positions given in pixels keep the same on-screen size, while clip-space positions scale with the window (see the sketch below).
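    As a rough native-API illustration of the two vertex layouts (the sample code itself uses the managed VertexFormat names, so the identifiers differ), pretransformed D3D9 vertices carry pixel positions while the pass-through path feeds clip-space positions straight to the rasterizer:

      #include <d3d9.h>

      // D3D9 pretransformed vertex: x/y are window pixels, so the geometry keeps
      // its pixel size when the window is resized.
      struct PretransformedVertex          // FVF: D3DFVF_XYZRHW | D3DFVF_DIFFUSE
      {
          float x, y, z, rhw;              // x/y in pixels, rhw usually 1.0f
          DWORD color;
      };

      // Pass-through vertex: positions are already in clip space (-1..1), so the
      // geometry always covers the same fraction of the window, whatever its size.
      struct ClipSpaceVertex
      {
          float x, y, z, w;                // clip-space coordinates, w = 1.0f
          float color[4];
      };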
  2. DX11 D3DXVECTOR3 and D3DX 11

    You should be able to use D3DX10 properly with 11. As for having to link both, D3DX10Math.h looks like it includes an inline part, so there's a good chance (I haven't tried it) it has everything you need to compile. You could try just including it and see if it breaks. [On second thought, that would never work.] Also, why is linking D3DX10 a problem? Anything with 11 support should have the 10 DLLs, I would think.
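    A minimal sketch of the "just link d3dx10" route, assuming the DirectX SDK headers and libraries are on the include/library paths:

      // Using the D3DX10 math types from a Direct3D 11 project.
      #include <d3d11.h>
      #include <d3dx10math.h>                 // D3DXVECTOR3, D3DXMATRIX, ...
      #pragma comment(lib, "d3dx10.lib")      // the D3DX10 import library

      int main()
      {
          D3DXVECTOR3 position(0.0f, 1.0f, 2.0f);
          D3DXMATRIX  world;
          D3DXMatrixTranslation(&world, position.x, position.y, position.z);
          return 0;
      }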
  3. What does "Every application crashed the driver after 5-6 seconds" mean? Does Windows display a message in the notification area saying that the driver crashed and was reset? Does your application fail with an exception or some other kind of error? If it does, what is the error?

    More directly on the subject, the fact that it takes 5-6 seconds sounds, to me, like it's possible you're somehow allocating objects every frame and never freeing them. It's not necessarily an object you're holding; it might be something SlimDX is doing for you. I'd focus my search (when comparing the SlimDX framework and your implementation) on Get*() calls that run every frame in your implementation and might be allocating a new object each call. If the driver has a bug, it might crash after allocating X objects. A simple Dispose of the correct object might fix it, but finding out which one you need to dispose, without adding incorrect Dispose calls, might be difficult and non-trivial.

    Also, I remember a fix relating to some of the DXGI objects being reallocated - are you using the latest version of SlimDX? Have you tried the latest SVN version?
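    The same leak pattern, sketched against the native D3D11 API rather than SlimDX (the idea is identical, with Release standing in for Dispose): a Get*() call made every frame hands back referenced objects that have to be released again.

      #include <d3d11.h>

      // Called every frame. OMGetRenderTargets AddRef's the views it returns;
      // without the matching Release calls, one pair of references leaks per
      // frame until something (possibly the driver) falls over.
      void InspectTargetsEachFrame(ID3D11DeviceContext* context)
      {
          ID3D11RenderTargetView* rtv = nullptr;
          ID3D11DepthStencilView* dsv = nullptr;
          context->OMGetRenderTargets(1, &rtv, &dsv);

          // ... inspect rtv/dsv here ...

          if (rtv) rtv->Release();
          if (dsv) dsv->Release();
      }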
  4. The texture wrapping modes (WRAP, CLAMP (the one you want), etc.) are part of the sampler state. In the code you pasted, you are setting the sampler state here:

      Sampler[0] = (Sampler); // Needed by pixel shader

    I'm not too sure about the syntax, but I think you might be setting the state to 0 somehow, or maybe a sampler state oddly named (Sampler) appears in one of your includes. Either way, you'd set it up something like this:

      sampler yourSampler = sampler_state
      {
          ADDRESSU = CLAMP;
          ADDRESSV = CLAMP;
      };

    Then you'd set "yourSampler" as the active sampler state. Hope this helps.
  5. The contents of the constant buffer would only have to be set once per frame (assuming they are the same for all draw calls). You'll still need to set the constant buffer once per effect, but that setting should remain between frames, so you'd only need to set it once when you create a new effect. After that, you never touch it, only update the contents of the constant buffer.
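    A plain D3D11 sketch of that split (no effect framework; the PerFrameConstants layout is illustrative): the buffer is created and bound once, and only its contents are updated each frame.

      #include <d3d11.h>

      struct PerFrameConstants
      {
          float viewProjection[4][4];      // whatever is shared by all draw calls
      };

      // Done once, at startup: create the buffer and bind it to a slot.
      // The binding sticks between frames.
      ID3D11Buffer* CreatePerFrameBuffer(ID3D11Device* device, ID3D11DeviceContext* context)
      {
          D3D11_BUFFER_DESC desc = {};
          desc.ByteWidth = sizeof(PerFrameConstants);
          desc.Usage     = D3D11_USAGE_DEFAULT;
          desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;

          ID3D11Buffer* buffer = nullptr;
          device->CreateBuffer(&desc, nullptr, &buffer);
          context->VSSetConstantBuffers(0, 1, &buffer);
          return buffer;
      }

      // Done once per frame: only the contents change, not the binding.
      void UpdatePerFrame(ID3D11DeviceContext* context, ID3D11Buffer* buffer,
                          const PerFrameConstants& constants)
      {
          context->UpdateSubresource(buffer, 0, nullptr, &constants, 0, 0);
      }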
  6. Actually, running "video" (as long as it has a very limited number of frames) is rather simple with GPU accelerated APIs. Since the GPU just renders the whole screen every frame, you can simply pick the correct frame for each video instance. As far as the software is concerned, it is simply rendering several instances of different frames at different locations on the screen. Any decent GPU should be fully capable of drawing multiple "videos" at the same time on different parts of the screen (under people's fingers, in your case). I'm sure there are other more complex options, but given your already available art, this sounds like a solution that should be easy to implement and will probably scale well. Hope this helps.
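    A minimal sketch of the idea (all names here are illustrative, not from the original post): each "video" instance just selects the texture for its current frame before its quad is drawn.

      #include <vector>
      #include <d3d11.h>

      struct VideoInstance
      {
          std::vector<ID3D11ShaderResourceView*> frames;   // pre-loaded frame textures
          size_t currentFrame = 0;
          float  x = 0.0f, y = 0.0f;                       // on-screen position
      };

      void DrawVideos(ID3D11DeviceContext* context, std::vector<VideoInstance>& videos)
      {
          for (VideoInstance& video : videos)
          {
              ID3D11ShaderResourceView* frame = video.frames[video.currentFrame];
              context->PSSetShaderResources(0, 1, &frame);
              // DrawQuad(context, video.x, video.y);       // draw the textured quad here
              video.currentFrame = (video.currentFrame + 1) % video.frames.size();
          }
      }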
  7. Using the new runtime shader linkage features is only supported on Direct3D 11-class hardware. You can't select an implementation for a class at runtime on older hardware. As such, you can't pass anything to the pClassLinkage parameter (since that would set implementations at runtime). You should be able to hard-code implementations in your shader code, and it should compile and run correctly. As for the lost effect pool functionality, while I don't have any specific answers about new functionality to replace it, it doesn't sound like a huge loss. IIRC, there were several issues with effect pools, and implementing similar functionality (using a shader constant buffer) shouldn't be all that difficult. Have you considered just using one constant buffer for all the shared stuff? Hope this helps.
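    For the shared-parameters half of that, one option (sketched against the Effects11 library that ships with the DirectX SDK; "SharedConstants" is an assumed cbuffer name) is to create a single constant buffer and hand it to every effect that declares a cbuffer with the same layout:

      #include <d3d11.h>
      #include <d3dx11effect.h>     // Effects11

      void ShareConstantBuffer(ID3D11Buffer* sharedBuffer,
                               ID3DX11Effect* effectA,
                               ID3DX11Effect* effectB)
      {
          // Both effects are assumed to declare a cbuffer named "SharedConstants"
          // with the same layout as sharedBuffer.
          effectA->GetConstantBufferByName("SharedConstants")->SetConstantBuffer(sharedBuffer);
          effectB->GetConstantBufferByName("SharedConstants")->SetConstantBuffer(sharedBuffer);

          // From here on, updating sharedBuffer once updates the shared values for both.
      }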
  8. You've said it yourself: "Marshal.SizeOf(typeof(test_buff)) returns 8 as it's supposed to in this case". So, 8 * 1024 = 8192, which is certainly above the upper allowed limit of 2048 stated in the error message. Looks like your buffer is simply too large.
  9. Sounds like something you might be able to pull off with occlusion queries, but even then, you can expect quite some time between making the draw call and getting the result (since the GPU doesn't actually draw immediately).
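    A rough D3D11 sketch of issuing an occlusion query and polling for the result; the result typically arrives a frame or more after the draw, which is the latency mentioned above.

      #include <d3d11.h>

      ID3D11Query* CreateOcclusionQuery(ID3D11Device* device)
      {
          D3D11_QUERY_DESC desc = {};
          desc.Query = D3D11_QUERY_OCCLUSION;

          ID3D11Query* query = nullptr;
          device->CreateQuery(&desc, &query);
          return query;
      }

      // Wrap the draw calls of interest between Begin and End.
      void DrawWithQuery(ID3D11DeviceContext* context, ID3D11Query* query)
      {
          context->Begin(query);
          // context->DrawIndexed(...);               // the geometry being tested
          context->End(query);
      }

      // Later (often a frame or two later), check how many samples passed the
      // depth test. Returns false while the result isn't ready yet.
      bool TryGetSamplesPassed(ID3D11DeviceContext* context, ID3D11Query* query,
                               UINT64& samplesPassed)
      {
          return context->GetData(query, &samplesPassed, sizeof(samplesPassed), 0) == S_OK;
      }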
  10. disable depth test

    Of course, by changing the DepthStencilState used while rendering, you can define the behavior of the depth test. By setting DepthEnable to false, all depth testing will be disabled.
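    A minimal native D3D11 sketch of that state (D3D10 is nearly identical): with DepthEnable set to FALSE, everything drawn while the state is bound ignores the depth buffer.

      #include <d3d11.h>

      ID3D11DepthStencilState* CreateNoDepthTestState(ID3D11Device* device)
      {
          D3D11_DEPTH_STENCIL_DESC desc = {};
          desc.DepthEnable    = FALSE;                        // disable the depth test
          desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;  // and depth writes
          desc.DepthFunc      = D3D11_COMPARISON_ALWAYS;

          ID3D11DepthStencilState* state = nullptr;
          device->CreateDepthStencilState(&desc, &state);
          return state;
      }

      // Bind it before the draw calls that should ignore the depth buffer:
      // context->OMSetDepthStencilState(state, 0);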
  11. When the device is reset, every setting on it should be considered "uninitialized". That is, you need to set every value again. This is mostly an issue with the settings you normally set at application start, since other settings get set every frame anyway.

    In addition, make sure all calls (other than Reset) are also succeeding. If you forgot to release/recreate an object, Reset might succeed but attempts to use the resource might fail.

    Lastly, Reset is sometimes a difficult method to call correctly. Some drivers will just outright fail when you do something they don't like. While I'm confident the situation is better than it used to be, you might still run into issues on specific hardware. If you're not using D3DX extensively, the difference in code between calling Reset and Release/Create on the device should be minor, so you could compare the results of the two to help identify issues.
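    A hedged D3D9 sketch of the usual sequence around Reset (the commented-out helpers are assumed, application-specific functions): release everything in D3DPOOL_DEFAULT, call Reset, then recreate those resources and re-apply every device state the application relies on.

      #include <d3d9.h>

      bool TryResetDevice(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS& presentParams)
      {
          // ReleaseDefaultPoolResources();        // assumed helper: free D3DPOOL_DEFAULT objects
          if (FAILED(device->Reset(&presentParams)))
              return false;                        // not ready yet; try again later

          // RecreateDefaultPoolResources();       // assumed helper
          // ReapplyDeviceState();                 // assumed helper: render/sampler states, etc.
          return true;
      }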
  12. Which version of the API are you planning to use? DirectX9, 10, or 11?
  13. I ran into this same issue when trying to write a quick D3D11 test app about 2 weeks ago. I'm quite sure MikeP has already submitted a fix for this issue. Are you using the latest SVN revision?
  14. Sounds like a leak of GPU resources of some kind, eventually resulting in an error due to lack of memory of some kind. Are you verifying all results for resource creation? Maybe you forgot a dispose somewhere? Are you even creating any resources per-frame?
  15. The "Texture Shader" was only ever a convenience thing. It allowed you to fill a texture by writing "HLSL" code instead of filling the texture using your own code. You could easily generate an array with the same values as this shader generates and create a new texture initialized to the array's values. [EDIT] also, you could write code to do it once and save the texture to a .dds file, and then load that instead.