AndyTX

Member
  • Content Count: 1153
  • Community Reputation: 806 (Good)
  • Rank: Contributor

About AndyTX
  1. AndyTX

    the best ssao ive seen

    Took a quick look at your RenderMonkey project and it looks great! Runs pretty fast on my 5870 too... when I get some time I'd like to write up a compute shader version where a local neighborhood of the depth buffer is cached in local memory to save bandwidth, and see how that performs. One quick note: in DX9 you need to offset by half a pixel when sampling the G-buffer texture to get a 1:1 pixel-to-texel mapping. Add this code to the end of your "do ssao" vertex shader to get rid of the diagonal line in your output:

    o.position.xy -= g_inv_screen_size;

    Note that this isn't required in DX10+ or OpenGL. Again, very neat! I'll try to find some time to dive in properly and take a look at the math in the next little while. Cheers!
  2. Can you post screenshots? Do you have issues with very small light sizes (and thus small soft edges)? While the receiver planar bias is pretty much the best you can do with PCF, it starts to break down with very large lights, since the receiver no longer approximates a plane over areas that large.
  3. AndyTX

    How to disable color write DX11/10

    Those errors look to be due to trying to render to your depth buffer while it is still bound to the pipeline as a shader resource (probably from a previous pass or frame). After a rendering "pass", it's good to null out the resources you had bound, which will prevent these issues (the runtime is detecting a potential read/write hazard). You can do that via something like:

    // Unbind render targets
    d3dDeviceContext->OMSetRenderTargets(0, 0, 0);
    // Unbind shader resources
    ID3D11ShaderResourceView* nullViews[8] = {0, 0, 0, 0, 0, 0, 0, 0};
    d3dDeviceContext->PSSetShaderResources(0, 8, nullViews);
    ...
  4. AndyTX

    ShaderX7 Practical CSM

    I haven't read the chapter, but that doesn't sound right to me either. The light space -> cascade transform is just a 2D linear scale/bias ('zoom') which is happily done in the pixel shader after projecting into light space, even for spot lights. Even doing the full projection into light space in the pixel shader should be just fine too, provided you manually apply the projective divide and viewport transform in accordance with the API you're using. I'd have to see the context, but naively I agree with you... that doesn't seem right.
  5. AndyTX

    VSM filtering

    Quote: Original post by B_old
    "What do you mean by 1-z? Right now I am outputting the depth manually, but maybe I should give it a try to just sample from the z-buffer."

    If you're writing something out manually it should be pretty much the exact same result as the hardware resolve, assuming you're doing a box filter for the resolve step. With a "standard" depth buffer, you can get more precision by using 1-z together with a floating-point depth buffer. See this paper for more details: http://doi.acm.org/10.1145/311534.311579
  6. AndyTX

    VSM filtering

    If you're doing it from a z-buffer, you probably want to use a 1-z fp32 buffer for precision reasons.
  7. AndyTX

    VSM filtering

    Resolving then blurring is the easiest. There's one clever thing you can do by sampling the MSAA texture though ("custom resolve"): converting from depth to the VSM/EVSM representation on the fly, while resolving. This allows you to render to a "standard" multisampled shadow map (one fp32) and then convert to a VSM/EVSM and resolve in one step. This saves memory footprint and bandwidth, as well as enabling some of the more efficient hardware "Z-only" optimizations. These aren't as significant as they used to be, but they can still make a difference. In either case, it's probably always best to do the resolve before the blurring.
  8. AndyTX

    [shadows] VSM - flickering ?

    Quote: Original post by NiGoea
    "But I guess that, once implemented, this problem wont dissapear."

    It will help a lot, but probably not "disappear" entirely.

    Quote: Original post by NiGoea
    "I'm curios to see how they solved this problem in AAA titles."

    Almost all AAA titles I know of use CSMs nowadays. And they don't entirely solve this problem - it's quite visible in many of them. In other cases they simply avoid moving the light, etc.

    Quote: Original post by NiGoea
    "Funnily, this is a basic thing, but a thing that I don't know how to use. How do I activate HW MSAA ??"

    Depends what API you are using. In DirectX 9, look at the MultiSample and MultisampleQuality parameters of CreateRenderTarget. In DX10+, look at the SampleDesc member of D3D10_TEXTURE2D_DESC.
  9. AndyTX

    [shadows] VSM - flickering ?

    You need to use resolution management together with good filtering (VSM or otherwise) to eliminate this. Resolution management (CSM, TSM, PSM, RMSM, ...) helps solve magnification problems, while filtering solves minification problems.
  10. Quote: "Whats the fastest and most realistic shadow rendering method?"

      These two goals are at odds. You can get perfect shadows very slowly, or increasingly coarse approximations that run faster. You need to choose the point on that scale that you want to hit for your application - there's no holy grail. This is the same with pretty much everything in graphics.
  11. Everything DX10+ can filter/MSAA all non-integer formats in practice (although not all of them are technically required by DX10). That includes fp32, fp16, and all of the unorm formats. MSAA is supported on everything IIRC, although integer targets have to be manually resolved (for obvious reasons, same with filtering). So no, there really aren't limitations on "modern" cards. Anything in the last few years is pretty orthogonal.
  12. AndyTX

    problem with ddx/ddy

    Quote: Original post by butthead_82
    "Yes, I'm using openGL and CG. GLSL would have been more convenient with openGL but problems with CG aren't that big."

    I see. Yeah, you can probably figure out how to make this work with Cg, but generally convincing an NVIDIA compiler to spit out code that AMD's will eat (which is already hard enough in OpenGL ;)) is going to be "fun" to say the least, unless you're doing fairly trivial stuff. Unfortunately the current state of GLSL is a bit wacky, with different compilers accepting different, technically "non-standard", code. So you either need to enable some extensions for the AMD cards (this includes adding code to the stuff that Cg generates, so hopefully it doesn't fully abstract all that), or target a newer version of GLSL (if Cg can do that).
  13. AndyTX

    problem with ddx/ddy

    Quote: Original post by butthead_82
    "Regarding this, I tested ddx/ddy on nvidia gtx260 and it worked, but on radeon 4850 (if I remember right) did not. Vilem Otte mentioned it's necessary to enable texture2DLod function on ATI cards. Do you think the same counts for ddx/ddy?"

    Now you've got me confused... are you using GLSL then? Or are you using Cg targeting OpenGL? If so, then you probably have to enable some extensions (or target a newer version of GLSL). If you're using OpenGL it might be simpler to just write GLSL directly rather than Cg, but your call.
  14. AndyTX

    problem with ddx/ddy

    Quote: Original post by butthead_82
    "Little confused now... Actually I'm writing a CG program, not HLSL. tex2D syntax above is something I saw in GPU gems and CG specs. The card is really old, radeon 9800."

    Right, in Cg I believe that is correct... all of the texturing functions are "overloaded". In HLSL, though, you should be using tex2Dgrad, and arguably even if the overload works in Cg, tex2Dgrad is a bit more explicit. And yeah, the 9800 is a SM2.0 card, and as such I doubt it supports gradient texture lookups. DirectX wouldn't have allowed you to compile that shader for a SM2.0 target, but I believe Cg is just generating code that doesn't work on that card... As an aside, I'm assuming that the shader above is just a small bit of a larger shader that does something interesting with the ddx()/ddy() before passing them to tex2D(grad)? Otherwise I'm not sure what you're trying to gain by computing the derivatives manually and passing them to the function, rather than just using the standard tex2D with implicit derivatives. The latter would be slightly faster, more compatible (it would work on your 9800), and would produce similar results.
  15. AndyTX

    problem with ddx/ddy

    You're right in that it actually compiles, but it's not "correct" per se. I suspect it only compiles for compatibility with some old Cg syntax. Check the HLSL docs, and you'll note that tex2D only takes two parameters. However, the HLSL compiler does appear to convert it to a texldd instruction if you pass four arguments, even though you should be using tex2Dgrad. That said, this doesn't explain the problem. What card are you trying to run this on? tex2Dgrad is SM3+.