A few questions about practical volumetric scattering implementation

7 comments, last by Lightness1024 11 years, 6 months ago
Ok, so I've managed to get this effect working as the common post-process implementation by Kenny Mitchell, but...

I'm a little clueless as to how to practically use this in a scene.
What I'm currently doing is clearing the "occlusion target" (the target that has the black geometry rendered on top of it) to a color, which is then scattered. Of course this is not what I want, as the whole background ends up completely white instead of just, e.g., the sun.

How would you fill in the parts that have to be white or a specific color?
Also, in my deferred renderer, what I currently do is render this occlusion information into a single color channel during GBuffer creation.
Is there a better way to do this? And doing it this way, I assume I can't really color the light, having only a single channel?

I've seen this feature used practically in the UDK, integrated as a checkbox on any light source you can place.
So let's assume I've got a point light placed somewhere. How would I fill the occlusion target with the white (or the light color) needed to make this effect work on that specific point light? (Also, if it's behind me I need to do some angle checking, right?)

screenshot of my implementation here:
http://cl.ly/image/3v3k0V0I2w2K
No suggestions? :/
Do you intend to use that effect on multiple lights, updated each frame? Because it requires some serious sampling (~50 to 100 texture fetches per pixel).
Also, this effect is very difficult to control artistically, because it adds a desaturating whitish haze to sky zones that you would have preferred to keep their intense blue. I have considered using an operator other than 'add'; e.g. I've tried multiply to darken the shadowed zones, but it was ugly, and it also needed to be scaled up (because 0 = black).
A mix of add and mul... ugly as well.
I don't believe in the volume scattering effect altogether. It can only work when placed by hand by artists in specific zones, like cathedral windows or small openings in dark areas, but as a global effect...
Also, you don't really need an occlusion target; you already have an albedo target, and the presence of data there is already a tag for an object's presence. But for this to work, you need to keep the sky rendering outside the deferred system. I'm also not sure this effect will be well behaved for lights that are not at 'infinity'.
First of all, thanks for answering.
I'm not trying to use this as a "global effect". I'm just not sure how to implement this correctly in a deferred environment, and how exactly I'd render the bright parts where the light source is located. You said I could use the albedo target as the occlusion target? So the bright parts are the parts of the skybox that are already bright (maybe put a threshold in there?), and then the albedo color is blurred with the scattering shader? But wouldn't that blur pretty much the whole screen? And in various colors, too?
Yes. The render target would be either the back buffer directly, or your final composed HDR texture; it is a post effect, after all. The only texture plugged in as source data at this point would be the albedo channel of your GBuffer, and indeed, when you read it, you use a formula to decide what is light and what is black (e.g. a threshold, like you said). So you don't blur the albedo; you blur a black-and-white mask that you then add to your destination HDR.
And that is the part where it hurts, because adding something always brightens the image, and this effect often gives desaturated, white/greyish results.
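For reference, here is a minimal sketch of what such a pass could look like, in the spirit of Mitchell's GPU Gems 3 sample. All resource names, the luminance threshold, and the NUM_SAMPLES/Density/Decay/Exposure constants are hypothetical placeholders to tune, not anything from this thread:
[source lang="cpp"]
// Post-process pixel shader: radial blur of a thresholded mask, added to the scene.
Texture2D AlbedoTex      : register(t0); // GBuffer albedo (sky rendered outside the deferred pass)
Texture2D SceneTex       : register(t1); // composed HDR scene
SamplerState LinearClamp : register(s0);

cbuffer ScatterParams : register(b0)
{
    float2 LightPosSS; // light position in [0,1] screen space
};

static const int   NUM_SAMPLES = 64;
static const float Density  = 0.9f;
static const float Decay    = 0.96f;
static const float Exposure = 0.25f;

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // Step from this pixel toward the light position.
    float2 delta = (uv - LightPosSS) * (Density / NUM_SAMPLES);
    float2 coord = uv;
    float  illuminationDecay = 1.0f;
    float3 shaft = float3(0.0f, 0.0f, 0.0f);

    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        coord -= delta;
        // Threshold the albedo: bright sky texels count as light, geometry as occluder.
        float3 albedo = AlbedoTex.Sample(LinearClamp, coord).rgb;
        float3 mask = (dot(albedo, float3(0.299f, 0.587f, 0.114f)) > 0.9f)
                      ? albedo : float3(0.0f, 0.0f, 0.0f);
        shaft += mask * illuminationDecay;
        illuminationDecay *= Decay;
    }

    // Additive composite over the scene (the brightening caveat above applies here).
    return float4(SceneTex.Sample(LinearClamp, uv).rgb + shaft * Exposure, 1.0f);
}
[/source]
The threshold is what keeps medium-bright albedo from bleeding into the shafts, and the additive composite at the end is exactly the brightening/desaturation problem described above.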
Alright, so I've managed to get the effect itself working quite nicely. My only problem now is the screen-space position of the light source.
How exactly would I know at what position the bright part of the skybox is located?
Obviously, if I just assign a value like float2(0.5f, 0.4f), it points in the right direction, but it moves around with the camera quite a lot because there's no transformation going on.
The author wrote something like: "The light position in screen space is computed by the standard world-view-project transform and is scaled and biased to obtain coordinates in the range [-1, 1]." What is that supposed to mean? When I use an arbitrary coordinate like float3(10.0f, 10.0f, 1.0f) and transform it with my world-view-projection matrix, all I get is some random values around 3.0f - 5.0f. Does anyone have an idea how to calculate these correctly?

In my code it looks like this:

XMVECTOR worldPos = XMVectorSet(0.0f, 50.0f, 50.0f, 1.0f);
XMMATRIX world = XMMatrixIdentity();
XMMATRIX view = this->renderer->GetEngine()->GetCamera()->GetViewMatrix();
XMMATRIX proj = this->renderer->GetEngine()->GetCamera()->GetProjectionMatrix();
// world -> view -> clip space
XMVECTOR screenPos = XMVector4Transform(worldPos, world);
screenPos = XMVector4Transform(screenPos, view);
screenPos = XMVector4Transform(screenPos, proj);
// stores the raw clip-space x/y -- no perspective divide by w yet
XMStoreFloat2(&pData->LightPositionSS, screenPos);
Yes, me too, I am dubious regarding that passage of the original paper.
What he meant is that you "just" have to use your math library, like GLM, to project the light position into your view. However, in the case of the sun there is no light position, since it is located at "infinity". But there is a way, using a "light proxy", with the same concept as the skybox itself: making it move with the view.
So you can find the "world position" of the sun by doing LP = cameraPosition - lightDirection. That should be enough.
Then project LP using this:
[source lang="cpp"]
float4 proj = mul(ViewProjMatrix, float4(LP, 1.f)); // world -> clip space
float2 pos = proj.xy / proj.w;                      // perspective divide -> NDC in [-1, 1]
[/source]
Then you can pass pos as a uniform variable to your shader to determine the direction of the blur streak.
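One caveat, tying back to the "what if it's behind me" question above: when the light goes behind the camera, proj.w becomes negative and the projected position is meaningless, so the effect has to be disabled or faded out. A minimal sketch of such a guard, under the same assumptions as the snippet above (the edge-fade formula is just one hypothetical choice):
[source lang="cpp"]
// Guard around the projection above: kill the effect when the light leaves the
// view instead of letting the blur direction flip around.
float4 proj = mul(ViewProjMatrix, float4(LP, 1.f));
float shaftIntensity = 0.0f; // light behind the camera: pass disabled
if (proj.w > 0.0f)
{
    float2 pos = proj.xy / proj.w; // NDC in [-1, 1]
    // Full strength while on screen, fading to zero as the light moves
    // one screen-width past the edge, so the effect doesn't pop on/off.
    shaftIntensity = saturate(2.0f - max(abs(pos.x), abs(pos.y)));
}
[/source]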
Ok, so I've just tried that out, but the light rays seem to rotate around too fast...
Shouldn't it be steady? Am I doing something wrong?
It's also not pointing towards me, but to the left of the screen.


XMMATRIX view = this->renderer->GetEngine()->GetCamera()->GetViewMatrixNonTransposed();
XMMATRIX proj = this->renderer->GetEngine()->GetCamera()->GetProjectionMatrixNonTransposed();
XMMATRIX viewproj = view * proj;
XMFLOAT3 lightDir = this->renderer->GetLightManager()->GetDirectionalLights().at(0)->GetLightDirection();
XMVECTOR LightDirection = XMLoadFloat3(&lightDir);
// sun proxy position: camera position minus the light direction
XMVECTOR LP = this->renderer->GetEngine()->GetCamera()->GetCamPosition() - LightDirection;
LP = XMVectorSetW(LP, 1.0f); // make it a point for the 4D transform
//LP = XMVectorSet(500.0f, 500.0f, 500.0f, 1.0f);
XMVECTOR projectedVec = XMVector4Transform(LP, viewproj);
XMFLOAT4 projectedV;
XMStoreFloat4(&projectedV, projectedVec);
// perspective divide: clip space -> NDC in [-1, 1]
pData->LightPositionSS = XMFLOAT2(projectedV.x / projectedV.w, projectedV.y / projectedV.w);
Oh yes, there is a final step after a raw projection like this.
Because (0, 0) would be the sun directly on your view axis, and most people would want that to be (0.5, 0.5) in screen space.
So to transform to screen space, you just need to do * 0.5 + 0.5 :) (on the float2).
Also, after that, most conventions result in a flipped Y, so just finish with:
LightSS.y = 1. - LightSS.y
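Putting the whole conversion together (a minimal sketch in HLSL-style notation, reusing the projectedV name from the code above):
[source lang="cpp"]
// Clip space -> [0,1] screen space, combining the steps above.
float2 LightSS = projectedV.xy / projectedV.w; // perspective divide -> NDC in [-1, 1]
LightSS = LightSS * 0.5f + 0.5f;               // scale and bias -> [0, 1]
LightSS.y = 1.0f - LightSS.y;                  // flip Y for a top-left texture-space origin
[/source]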
