myers

Members
  • Content count: 94
Community Reputation

143 Neutral

About myers

  • Rank: Member
  1. I thought it looked too good to be true at first as well. Unfortunately, trying it out confirmed this. The inaccuracies quickly become quite noticeable if there's a significant difference between the brightness of overlapping fragments. The main issue is that the transparent fragments don't look ordered at all: a bright fragment in front of a dull one looks identical to a dull one in front of a bright one (the sketch below shows why). In my view, this looks quite glaringly wrong.

     I now use per-pixel linked lists to do OIT. It's obviously more expensive than the weighted-average approach, but it produces 100% accurate results, and I think it's worth the tradeoff.

     I still use weighted average as a fallback, though. It remains preferable to no OIT at all, and might be okay if you're only using it for fast-moving particles or something. I'd recommend giving it a shot anyway, as it's pretty trivial to implement.
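     For anyone curious, the resolve step of the weighted-average technique boils down to something like the following. This is a sketch in plain C++ rather than shader code, purely for illustration - WavgPixel and resolve are made-up names, not anyone's actual API:

        #include <cmath>
        #include <cstdio>

        // Per-pixel totals that the accumulation pass would have written.
        struct WavgPixel {
            float sumR, sumG, sumB; // sum over all fragments of colour * alpha
            float sumA;             // sum of alpha
            int   n;                // fragment count
        };

        // Composite the averaged transparency over the opaque background.
        // Note there is no depth anywhere in here: every fragment contributes
        // identically, which is exactly why a bright fragment in front of a
        // dull one resolves the same as the reverse ordering.
        void resolve(const WavgPixel& p, const float bg[3], float out[3])
        {
            if (p.n == 0 || p.sumA <= 0.0f) {
                out[0] = bg[0]; out[1] = bg[1]; out[2] = bg[2];
                return;
            }
            float avgA = p.sumA / (float)p.n;               // average coverage
            float T    = std::pow(1.0f - avgA, (float)p.n); // approx. transmittance
            out[0] = (p.sumR / p.sumA) * (1.0f - T) + bg[0] * T;
            out[1] = (p.sumG / p.sumA) * (1.0f - T) + bg[1] * T;
            out[2] = (p.sumB / p.sumA) * (1.0f - T) + bg[2] * T;
        }

        int main()
        {
            // A bright fragment (1,1,1, a=0.5) over a dull one (0.1,0.1,0.1,
            // a=0.5): swapping their order changes nothing, as only sums survive.
            WavgPixel p = { 0.55f, 0.55f, 0.55f, 1.0f, 2 };
            float bg[3] = { 0.0f, 0.0f, 0.0f }, out[3];
            resolve(p, bg, out);
            std::printf("%.3f %.3f %.3f\n", out[0], out[1], out[2]);
            return 0;
        }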
  2. On DX11, disabling depth writes lets you sample and depth-test against the same depth texture simultaneously. This is useful, for example, for soft particles: without it, you'd either have to make a copy of the depth texture, or disable depth testing and lose early-z culling.

     Is this supported in OpenGL 4+? If I call glDepthMask(0), is it safe to assume that I can sample from the depth texture bound as an attachment of the FBO I'm rendering to (the setup sketched below)? I see nothing in the specs to indicate that it is. I don't notice any problems on my hardware, but I've found that a lot of off-spec behaviour works fine on Nvidia.
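     To be concrete, this is the setup I mean - a fragment rather than a complete program, where sceneFbo, depthTex, and drawParticles stand in for my own FBO, its depth attachment, and the particle draw:

        // depthTex is attached as the depth buffer of the FBO being rendered to.
        glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);

        glDepthMask(GL_FALSE);    // no depth writes...
        glEnable(GL_DEPTH_TEST);  // ...but still depth-testing against depthTex

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, depthTex); // ...while the soft-particle
                                                // shader also samples depthTex
        drawParticles();

        glDepthMask(GL_TRUE);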
  3. Quote: Original post by Matias Goldberg
        Quote: Original post by myers
           But I'm not sure how it would help with this problem: even if you use a large blur kernel, wouldn't that just "spread out" the flickering, rather than making it less noticeable?

        Yes, it might, but you sure alleviate the problem (and sometimes a lot). Imagine just two pixels, one black (shadowed), another white (lit). When you move the camera or the light, the two pixels swap their colours due to sampling problems (the white pixel becomes the black one and vice versa). If you use filtering, both pixels will be grey. Furthermore, if both are equally grey, then even if they constantly switch places due to sampling imprecision, they still have the same colour - i.e. no flicker at all. That was a simplified example, but in the big picture, errors and flickering get blended with correct pixels, producing smoother results. Of course, very large kernels will introduce other problems, such as pixels that should be fully lit becoming slightly shadowed.

     Or (more glaringly, I think) pixels that should be shadowed becoming lit. I hope to get a chance to implement ESM this week and report back, but bad experiences have made me generally wary of covering up artifacts simply by increasing blur size. If this is the standard approach, though, I'll certainly give it a try. Someone mentioned Crysis - I believe they use VSM, so I'll have to check that out and see how shadows cast by moving vegetation look.

     Quote: Original post by 2square
        See the section "Moving the Light in Texel-Sized Increments" in Common Techniques to Improve Shadow Depth Maps. Hope this helps.

     That discusses a slightly different issue, though, which is achieving stability with CSM when the camera moves, not the light.
  4. Quote: Original post by Matias Goldberg
        Which shadow mapping technique are you using? For large outdoor scenes, PSSM (Parallel-Split Shadow Maps) is recommended. Assassin's Creed II combines it with VSM (Variance Shadow Maps) to hide even further the subtle (but annoying) flickering of day-night cycles. IIRC, Burnout Paradise uses VSM too to hide day-night cycle flickering. So... which technique are you using?
        Cheers
        Dark Sylinc

     CSM. At least, I think so - I'm a little unclear on the difference between CSM and PSSM. Basically, I split the light frustum in three and use a map per split. I did initially also use VSM, but dropped it because it resulted in pretty bad light leaks and was actually slower than vanilla shadow mapping, even with PCF.

     I'll have a look at ESM, as smasherprog suggests. But I'm not sure how it would help with this problem: even if you use a large blur kernel, wouldn't that just "spread out" the flickering, rather than making it less noticeable?
  5. One of the problems of shadow maps which receives little attention (as far as I can see) is that their limited resolution is particularly troublesome when a light is in motion, and that perspective-warping approaches to maximising resolution actually exacerbate this. It's not worth worrying about when lights move quickly, but slow, subtle movements cause very noticeable flickering at shadow edges, even when the texel-to-pixel ratio is fairly high. The cause of this is obvious, but I'm having trouble finding any suggestions for remedying it.

     I expected it to be a problem, but didn't notice how bad it looked until I implemented a dynamic time-of-day system. Because the sun moves across the sky at a rate of a fraction of a degree per frame, the shadow edges swim constantly.

     I've tried two ways of mitigating this. One is to move the sun in greater, less frequent increments. This arguably looks slightly better, as there is no incessant swimming, but it is replaced by dramatic "jumps" when the light does move, which are even more noticeable. The other approach I tried was having two shadow maps, with the light at slightly different positions for each, and blending between them (sketched below). As expected, this pretty much eliminates the whole visual problem, but at the cost of doubling memory usage, rendering passes, and lookups. Accumulating both passes into a single buffer would obviously reduce these overheads, but I don't see how that's possible.

     This must be a fairly common problem, as it will arise in any situation involving a slow-moving light - any system that moves a shadow-casting sun in real time (or even 10x, 20x, 30x real time) will face it. Is there a less expensive way of addressing it than my solution?
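     To illustrate the blending approach, here's a minimal, runnable sketch of just the weight computation - stepDeg is an arbitrary snap increment, and sampleShadow/mapA/mapB in the comment are placeholders for my own shadow lookup:

        #include <cmath>
        #include <cstdio>

        int main()
        {
            const float stepDeg     = 0.5f;   // arbitrary snap increment
            float       sunAngleDeg = 41.37f; // example sun position this frame

            float a = std::floor(sunAngleDeg / stepDeg) * stepDeg; // previous snapped angle
            float b = a + stepDeg;                                 // next snapped angle
            float t = (sunAngleDeg - a) / stepDeg;                 // blend factor in [0,1]

            // One shadow map is rendered with the light at angle a, the other
            // at angle b (the doubled cost mentioned above). Per pixel, the
            // lighting pass then does:
            //   shadow = sampleShadow(mapA) * (1 - t) + sampleShadow(mapB) * t;
            std::printf("blend %.2f deg -> %.2f deg, t = %.2f\n", a, b, t);
            return 0;
        }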
  6. shared_ptr looks interesting. Thanks for the tips, all.
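     For anyone who finds this thread later, a minimal sketch of where this ended up - a vector of shared_ptr (HeavyResource is just a stand-in for the objects with large allocations):

        #include <memory>
        #include <vector>

        struct HeavyResource { /* large dynamically allocated innards */ };

        int main()
        {
            // Cheap to shift/copy inside the vector, random access retained,
            // and deletion is automatic no matter which slot's destructor runs.
            // (Pre-C++11, this would be boost::shared_ptr or std::tr1::shared_ptr.)
            std::vector<std::shared_ptr<HeavyResource> > objects;
            objects.push_back(std::make_shared<HeavyResource>());
            objects.push_back(std::make_shared<HeavyResource>());

            objects.erase(objects.begin()); // the erased object is freed exactly once
            return 0;
        }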
  7. Quote: Original post by c_olin
        Quote: Original post by myers
           Quote: Original post by visitor
              Standard containers expect the value types to be copyable and assignable. If these operations work properly, it shouldn't make any difference to you which object had its destructor called.

           That'll be the problem, then: I can either have copy and assign methods which reallocate and deallocate large chunks of memory, or just use vectors of pointers. Think I'll go with pointers.

        But then the objects are no longer contiguous in memory, and you might as well be using a list instead of a vector.

     Using a vector lets me retain random access, though.
  8. Quote: Original post by visitor
        Standard containers expect the value types to be copyable and assignable. If these operations work properly, it shouldn't make any difference to you which object had its destructor called.

     That'll be the problem, then: I can either have copy and assign methods which reallocate and deallocate large chunks of memory, or just use vectors of pointers. Think I'll go with pointers.
  9. It matters if the destructor dynamically deallocates memory:

        class SomeClass {
            SomeOtherClass* foo;
        public:
            SomeClass()  { foo = new SomeOtherClass; }
            ~SomeClass() { delete foo; }
        };

     If you use vector::erase to remove a SomeClass object from a vector and its destructor is not called, the memory pointed to by foo will not be deallocated. Additionally, the final element in the vector will have its destructor called, so its foo will be unexpectedly freed. I suppose the simplest way round this is to use vectors of pointers for non-POD types, and delete them manually.
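     For contrast, the "copyable and assignable" route quoted in the replies above would look something like this sketch (the rule of three) - the deep copies are exactly the reallocation and deallocation cost I was worried about:

        struct SomeOtherClass {}; // stand-in for the dynamically allocated data

        class SomeClass {
            SomeOtherClass* foo;
        public:
            SomeClass() : foo(new SomeOtherClass) {}

            // Deep copy, so shifting elements inside a vector duplicates the
            // data rather than aliasing the pointer.
            SomeClass(const SomeClass& other) : foo(new SomeOtherClass(*other.foo)) {}

            SomeClass& operator=(const SomeClass& other)
            {
                if (this != &other) {
                    SomeOtherClass* fresh = new SomeOtherClass(*other.foo);
                    delete foo; // allocate first, so foo is untouched if new throws
                    foo = fresh;
                }
                return *this;
            }

            ~SomeClass() { delete foo; }
        };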
  10. Just a quick question to check I'm not going insane: am I correct in observing that, when you delete an element from a vector using erase(), the destructor of the erased element will not be called, but the destructor of the last element in the vector will?

     Perhaps it's implementation-dependent, but I guess the reason it's happening is that all the elements after the erased one are being shifted down, so the destructor is called on the final, "hanging" element. I can't find this documented anywhere, but it seems like something that should be warned about, since the naive expectation would be for the destructor of the "erased" element to be called.

     Can anyone confirm that what I'm experiencing is the expected behaviour? (A minimal test is sketched below.)
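     Here's the test showing what I mean - Probe is just an instrumented stand-in:

        #include <cstdio>
        #include <vector>

        struct Probe {
            int id;
            explicit Probe(int i) : id(i) {}
            ~Probe() { std::printf("~Probe(%d)\n", id); }
            // Default copy constructor/assignment copy 'id', which is how
            // erase() shifts the trailing elements down.
        };

        int main()
        {
            std::vector<Probe> v;
            v.reserve(3);          // avoid reallocation copies muddying the output
            v.push_back(Probe(1)); // each temporary also prints when destroyed
            v.push_back(Probe(2));
            v.push_back(Probe(3));

            std::puts("--- erasing middle element ---");
            v.erase(v.begin() + 1); // copy-assigns element 3 down into slot 1,
                                    // then destroys the now-redundant last slot:
                                    // prints ~Probe(3), never ~Probe(2)
            std::puts("--- done ---");
            return 0; // the surviving elements (1 and 3) are destroyed here
        }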
  11. I'm not worried about sorting. It's just that alpha blending won't work. The decals lit by any given light should only be blended with the scene geometry which is also lit by that light. The decals are rendered after scene geometry, so with alpha blending, if a pixel is affected by lights 1 and 2, say, the decal pass for light 1 would be blended with the scene geometry pass for lights 1 and 2, rather than just light 1. This is the problem which the Tom Forsyth article I linked to attempts to solve - and it does solve it, except, as I've found, in cases where decals overlap.
  12. Thanks for the replies.

     Quote: Original post by RDragon1
        Deferred lighting makes this super trivial and less costly. Render all your geometry, render all of your decals to just the albedo buffer, render your lights, and everything just works.

     Yes, deferred lighting would do it. But it would require a pretty radical redesign of my pipeline, which seems like overkill just to solve this problem. I'd prefer to avoid deferring lighting generally, since it comes with a number of issues (performing multisampling on pre-DX10 hardware, transparency, etc.).

     Quote: Original post by snake5
        2 ideas.. 1. don't use alpha-blended decals 2. try to multiply the lighting of a decal with its alpha.

     1. I don't think alpha-masking would be sufficient for most of the decals.

     2. Hmm, you mean pre-multiply the decals' RGB channels by their alpha? I'm not sure I see how that would help. The decals would still be blended additively, so they would still brighten each other when overlapping. It would certainly be very easy to pre-multiply the decal textures, however, so if that would solve the problem in some way that I'm missing, I'd be very interested in hearing about it.
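     (For anyone reading along: "premultiplied alpha" usually implies changing the blend function too, roughly as below in GL terms. As noted above, though, my decal lighting passes are purely additive, so this alone wouldn't fix the overlap brightening.)

        // Conventional ("straight") alpha blending:
        //   dst = src.rgb * src.a + dst.rgb * (1 - src.a)
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        // Premultiplied alpha: the texture's RGB already contains colour * alpha,
        // so the source factor drops to GL_ONE:
        //   dst = src.rgb + dst.rgb * (1 - src.a)
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        // My decal lighting passes, by contrast, are additive:
        //   dst = src.rgb + dst.rgb
        glBlendFunc(GL_ONE, GL_ONE);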
  13. Anyone know the usual way of doing this?
  14. I'm rendering a scene using multipass lighting. After all the opaque passes have been done, I add decals (dirt, holes in walls, etc.) to the scene. This involves running each lighting pass again, only for the decals, blending them additively into the scene. Because the decals are being added to an already lit scene, I use something like the technique described in the "Premultiplied alpha" article here (basically, doing an extra pass between the opaque and decal passes to darken the scene where the decals will be rendered) so that the lighting is correct.

     This all works nicely, except where decals overlap. When that happens, the overlapping parts end up too bright. This is because, while each lighting pass must be additively blended with the framebuffer, the individual decals should instead be alpha-blended with each other.

     The only way I can see to avoid this is to render each decal lighting pass using alpha blending to an offscreen buffer, and then additively blend that buffer onto the scene (sketched below). But this would be costly. Not only would it add the expense of the blit itself; I would also have to copy the scene's depth to this offscreen buffer so that the decals would be occluded properly. And this would be per light. So I don't think it's particularly practical.

     Is there another way of getting the lighting right on overlapping decals? Perhaps I'm overlooking a blending function that will make this work, or a technique in the pixel shader? Thanks for any assistance.
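     To illustrate, the offscreen workaround would amount to something like this per light - a fragment, where decalFbo, sceneFbo, decalColourTex, drawDecalsForLight, and drawFullscreenQuad are placeholders for my own code:

        // 1) Alpha-blend this light's decals against each other off-screen.
        //    decalFbo would need to share (or copy) the scene's depth buffer
        //    so the decals are still occluded correctly.
        glBindFramebuffer(GL_FRAMEBUFFER, decalFbo);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        drawDecalsForLight(light);

        // 2) Additively composite the result onto the scene, as the ordinary
        //    decal lighting pass would have been.
        glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
        glBlendFunc(GL_ONE, GL_ONE);
        drawFullscreenQuad(decalColourTex);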
  15. Shadow Technique Name

    Sounds like ordinary shadow mapping to me - deep shadow maps would be more for translucent shadows. Just remember to write to alpha in the shader so fragments can be alpha-tested.
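     In fixed-function GL terms, that's something along these lines during the shadow-map pass - a fragment, with renderShadowCasters as a placeholder and the 0.5 threshold an arbitrary choice:

        // Reject (mostly) transparent texels so they don't write shadow depth.
        // In a shader-only pipeline, the equivalent is discarding fragments
        // whose sampled alpha falls below the threshold.
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.5f);
        renderShadowCasters();
        glDisable(GL_ALPHA_TEST);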