Forward rendering limitations?

Started by
28 comments, last by zedz 14 years, 4 months ago
I believe this can help too:
http://diaryofagraphicsprogrammer.blogspot.com/2009/11/order-independent-transparency.html
---
Gandalf once said: "Keep it secret, keep it safe." I say: Keep it simple, keep it GREAT!
http://www.realityshape.pt.vu
Quote:And the 3 levels of transparency are more or less fixed, so you can't fade objects in/out smoothly, and transparency loses many uses this way

You're misunderstanding "transparency layers". It means there are 3 layers that the view ray hits along its path, not 3 alpha grades. Alpha can be 0-255 as usual.
Quote:Original post by KRIGSSVIN
Quote:And the 3 levels of transparency are more or less fixed, so you can't fade objects in/out smoothly, and transparency loses many uses this way

You're misunderstanding "transparency layers". It means there are 3 layers that the view ray hits along its path, not 3 alpha grades. Alpha can be 0-255 as usual.


I have never worked with stippling, but I assume there are certain transparency percentages you can't achieve, since you're "discretizing" it. In a 9x9 pixel area there are only a few possible stipple patterns, and you're stuck with those transparency levels. So maybe you have 20 transparency grades or so for a single layer; with more layers, fewer levels... If you wanted more, you would have to resort to more complex patterns and bigger blur kernels, I suppose. All this is off the top of my head, I haven't tried it.

However, I don't really like this trick; I'd rather use reverse depth peeling or something along those lines.

EDIT: In the inferred lighting paper the DSF is a weighted bilinear filter, so your transparency levels are even more restricted, since you have fewer pixels to work with. :( Unless I'm misunderstanding it, I think that technique is pretty useless...
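To make the "discretizing" worry concrete, here is a CPU sketch of classic screen-door (stipple) transparency using a standard 4x4 Bayer ordered-dither matrix. This is my own illustration of the general technique, not the scheme from the inferred lighting paper: a fragment survives when its alpha exceeds the matrix threshold at its screen position, so a 4x4 tile can only express 17 distinct coverage levels.

```python
# Screen-door transparency sketch: a fragment is kept when its alpha
# beats the ordered-dither threshold at its pixel, which quantizes any
# alpha in [0, 1] down to one of 17 coverage levels per 4x4 tile.

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def coverage(alpha):
    """Fraction of pixels in a 4x4 tile that survive the stipple test."""
    kept = sum(
        1
        for y in range(4)
        for x in range(4)
        if alpha * 16.0 > BAYER_4X4[y][x]
    )
    return kept / 16.0

# Every 8-bit alpha collapses onto one of only 17 coverage levels.
levels = sorted({coverage(a / 255.0) for a in range(256)})
```

With a bigger matrix (and a correspondingly bigger blur kernel) you get more levels, which matches the trade-off described above.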
Quote:Original post by ArKano22
I have never worked with stippling, but I assume there are certain transparency percentages you can't achieve, since you're "discretizing" it. In a 9x9 pixel area there are only a few possible stipple patterns, and you're stuck with those transparency levels. So maybe you have 20 transparency grades or so for a single layer; with more layers, fewer levels... If you wanted more, you would have to resort to more complex patterns and bigger blur kernels, I suppose. All this is off the top of my head, I haven't tried it.

However, I don't really like this trick; I'd rather use reverse depth peeling or something along those lines.

EDIT: In the inferred lighting paper the DSF is a weighted bilinear filter, so your transparency levels are even more restricted, since you have fewer pixels to work with. :( Unless I'm misunderstanding it, I think that technique is pretty useless...


I think the stippling is used to select the correct sample; I don't think there's any "discretizing" going on. Could be wrong, I need some more time to digest this method.
Cheers,
Martin
If I've helped you, a rating++ would be appreciated
Martin is right: when you use stippling, every fourth sample represents one of FOUR fragments of alpha surfaces, each of which can be of any alpha grade. Read more about inferred lighting to completely understand what I mean.
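A minimal sketch of that interleaving idea, as I understand it from the discussion: each translucent layer claims one pixel of every 2x2 block, so its stored value survives at full precision and can be gathered back out later. The layer-id rule and buffer layout here are illustrative assumptions, not the paper's exact scheme.

```python
# Interleaved-sample sketch: layer k owns pixel (x, y) of every 2x2
# block when (x & 1) + 2 * (y & 1) == k, so four layers share one
# full-resolution buffer without quantizing their stored values.

def stipple_pass(x, y, layer_id):
    """True when pixel (x, y) belongs to translucent layer `layer_id`."""
    return (x & 1) + 2 * (y & 1) == layer_id

def gather_layer(buffer, layer_id):
    """Collect every stored sample belonging to one layer from a
    buffer laid out as buffer[y][x]."""
    return [
        buffer[y][x]
        for y in range(len(buffer))
        for x in range(len(buffer[0]))
        if stipple_pass(x, y, layer_id)
    ]
```

So the stipple pattern routes each sample to its layer; nothing about the sample's alpha itself is quantized.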
Quote:Original post by KRIGSSVIN
Martin is right: when you use stippling, every fourth sample represents one of FOUR fragments of alpha surfaces, each of which can be of any alpha grade. Read more about inferred lighting to completely understand what I mean.


I've re-read the paper, and yes, you are right :). It stores interleaved lighting info for the translucent pixel, not the actual lighting result blended with nearby samples, so you can reconstruct the real alpha value later. That's cool!

I'm already implementing deferred and it's so simple ^^. I'm glad I tried it.
>> disagree that it requires more memory, see the light accumulation method

I'm not sure what you mean by the light accumulation method. I was talking about more memory because you need to set aside GPU memory for the extra buffers (normals, specular, etc.)

>>This is why just about any working implementation will use bounding volumes for lights combined with depth/stencil testing. Even with a fullscreen quad you can use a scissor test to cull most of the pixels you're not interested in.

True (you can also do this with forward rendering, btw), but the OP mentioned that each light is going to shade most (or all) of the pixels on the screen.
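For anyone wondering what the scissor trick from the quote looks like in practice, here is a CPU sketch that builds a conservative pixel-space scissor rect from a view-space light sphere. The projection model (symmetric perspective, camera looking down -Z, single `focal` scale) and the function name are my own simplified assumptions; a real implementation would handle near-plane crossing and perspective bounds more carefully.

```python
# Conservative scissor rect for a point light's bounding sphere, so the
# lighting pass only touches pixels the light can possibly reach.

def light_scissor(cx, cy, cz, radius, focal, width, height):
    """Project a view-space sphere (cz < 0) into a conservative
    pixel-space rect (x0, y0, x1, y1), clamped to the viewport."""
    # Nearest depth of the sphere; if it reaches the camera, light the
    # whole screen rather than risk clipping the volume.
    z_near = -cz - radius
    if z_near <= 0.0:
        return (0, 0, width, height)

    # Conservative bound: project the sphere's axis-aligned extents at
    # its nearest depth (slightly loose, never too tight).
    def to_pixel_x(vx):
        ndc = (vx * focal) / z_near
        return (ndc * 0.5 + 0.5) * width

    def to_pixel_y(vy):
        ndc = (vy * focal) / z_near
        return (ndc * 0.5 + 0.5) * height

    x0 = max(0, int(to_pixel_x(cx - radius)))
    x1 = min(width, int(to_pixel_x(cx + radius)) + 1)
    y0 = max(0, int(to_pixel_y(cy - radius)))
    y1 = min(height, int(to_pixel_y(cy + radius)) + 1)
    return (x0, y0, x1, y1)
```

Of course, as noted above, if every light really covers most of the screen, the scissor rect degenerates to (almost) the full viewport and saves you nothing.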

Just for a laugh, I tried my latest game with 101 point lights + 1 spotlight + 1 directional light (all onscreen).

(the dust particles are translucent, so they won't work too well with deferred; they have to be rendered here >100 times, once for each light)
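The "once for each light" cost can be sketched as a toy forward multi-pass loop: each pass re-submits the geometry and adds one light's contribution with additive blending (the GL_ONE, GL_ONE style). The shading model here is a made-up distance-attenuated multiply, just to show the draw-count scaling.

```python
# Toy forward multi-pass accumulation: 101 lights means the same
# particles get drawn 101 times, once per additive lighting pass.

def shade_one_light(surface_color, light_color, attenuation):
    """One light's contribution to a surface (toy model)."""
    return tuple(s * l * attenuation
                 for s, l in zip(surface_color, light_color))

def forward_multipass(surface_color, lights):
    """lights: list of (light_color, attenuation) pairs. One 'draw'
    per light, accumulated additively like blend ONE, ONE."""
    accum = (0.0, 0.0, 0.0)
    draw_calls = 0
    for light_color, attenuation in lights:
        contrib = shade_one_light(surface_color, light_color, attenuation)
        accum = tuple(a + c for a, c in zip(accum, contrib))
        draw_calls += 1  # geometry is re-submitted every pass
    return accum, draw_calls
```

Deferred avoids the per-light geometry resubmission for opaque surfaces, which is exactly why the translucent particles are the awkward case.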



~40fps at 1650x1150 @ 4xAA on my slowish card, not too bad (only 1 point light == ~65fps, for comparison). True deferred would perhaps be quicker, ~45fps.

i.e. it depends very much on what the user wants; that's why I suggested the OP post a screenshot of what they're trying to achieve.
zedz, if you are not already doing so with your deferred rendering, I recommend reserving a stencil bit for "opaque", the shot you show has a ton of alpha-blended/clipped pixels because very little of the viewport is occupied by opaque objects.
Ta patw, though I'm using forward rendering.
I retract my guess above that deferred rendering would be ~10% quicker; I now think it would be roughly similar in FPS to forward rendering with ~100 lights.

This topic is closed to new replies.
