Packing Data Advice

Started by
14 comments, last by mauro78 11 years, 6 months ago
Yes, bandwidth might be a performance problem, but even games that support a lot of (older) hardware write up to 4 buffers at once (e.g. StarCraft II: 4x 16-bit float RGBA). Packing 3 independent values in the range [0..1] into a single 16-bit float is really hard without losing a lot of quality. Try writing your render engine first and see if you can optimize it later on.
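To make the precision point concrete, here is a minimal sketch (not from the thread; names are illustrative) of quantizing three [0..1] values into one 16-bit word with a 5-6-5 bit split, the same layout as the classic R5G6B5 format:

```python
# Sketch: pack three [0..1] floats into a single 16-bit integer (5-6-5 split)
# and unpack them again, to show how much precision is lost per channel.

def pack_565(r, g, b):
    """Quantize three [0..1] floats into one 16-bit value (5, 6, 5 bits)."""
    ri = min(int(r * 31 + 0.5), 31)   # 5 bits -> only 32 levels
    gi = min(int(g * 63 + 0.5), 63)   # 6 bits -> 64 levels
    bi = min(int(b * 31 + 0.5), 31)   # 5 bits -> 32 levels
    return (ri << 11) | (gi << 5) | bi

def unpack_565(p):
    """Recover the three floats; each channel has been rounded to its grid."""
    return ((p >> 11) & 31) / 31, ((p >> 5) & 63) / 63, (p & 31) / 31
```

A 5-bit channel can be off by up to ~1/62 after the round trip, which is why three independent values in 16 bits tend to band visibly.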

[quote name='MegaPixel' timestamp='1348755712' post='4984362']
I personally use only one 16-bit fp render target, as everything gets modulated in one go:

return diffuseTerm * lightColor

diffuseTerm should be what you call in-scatter, and lightColor is the RGB color of a given light.

I don't see why you have to use so many render targets to store the computed lighting... any reason for that?

The problem, IMHO, is more relevant when you have to choose the number of g-buffers and their number of bits per channel, as that is one thing that can influence your bandwidth.


MegaPixel, in-scatter is not diffuse lighting; that's why I need more space :-)

http://en.wikipedia...._scattering....

my final color is:
fragment color = (diffuse fragment color * texture_color) + in-scatter fragment color

regards

p.s.
> The problem, IMHO, is more relevant when you have to choose the number of g-buffers and their number of bits per channel, as that is one thing that can influence your bandwidth.

I agree, but my problem is not about this (the g-buffer creation phase)... it's about the next one: the accumulation phase.
[/quote]

Ohh, you meant indirect lighting?
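mauro78's final-color formula quoted above can be sketched per channel as follows (a hypothetical illustration; the function name and tuple representation are made up):

```python
# Sketch of the compositing formula from the quoted post, applied channel-wise:
# fragment = diffuse * texture + in-scatter, with colors as linear-space RGB tuples.

def final_color(diffuse, texture, inscatter):
    """Modulate diffuse lighting by the surface texture, then add in-scatter."""
    return tuple(d * t + s for d, t, s in zip(diffuse, texture, inscatter))
```

Because the in-scatter term is added rather than modulated, it cannot be folded into the diffuse product, which is why it needs its own storage.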

[quote name='MegaPixel' timestamp='1348755712' post='4984362']
[...]

I agree, but my problem is not about this (the g-buffer creation phase)... it's about the next one: the accumulation phase.
[/quote]

The accumulation phase is not a problem; just use another render target :). I personally use a different render target for my indirect lighting :) ...

Again, the problem is normally the g-buffer phase. In a typical pipeline, when you need to compose different results it's very normal to subdivide them into more passes, each pass with its own render target, and then compose them at the end.
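The light-accumulation step being discussed, one additive pass per light blended into a shared target, might be outlined like this (a sketch with hypothetical names, not engine code; buffers are flat lists of per-pixel intensities for brevity):

```python
# Sketch: additively blend each light's per-pixel contribution into one
# accumulation render target, as in a deferred lighting accumulation phase.

def accumulate(light_passes, width, height):
    """Sum the contribution of every light pass into a single target."""
    target = [0.0] * (width * height)
    for light_pass in light_passes:          # one pass per light
        for i, contribution in enumerate(light_pass):
            target[i] += contribution        # additive blend
    return target
```

A separate target (e.g. for in-scatter) would be accumulated the same way and composed with this one at the end.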

[quote name='mauro78' timestamp='1348756396' post='4984365']
[...]

Ohh, you meant indirect lighting?
[/quote]

No, it's not.
The diffuse computation models the interaction between photons and a surface.
Scattered light models the interaction between photons and the atmosphere or another medium. Here you can have three types of results: in-scattered, out-scattered, and absorbed.

The resulting effect is a sort of volumetric effect...
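As a concrete textbook instance (not from this thread) of combining absorption/out-scattering with in-scattering, the classic single-scattering fog model attenuates surface light by the transmittance exp(-sigma*d) and adds in-scattered light weighted by the remainder:

```python
import math

# Sketch of the standard single-scattering ("distance fog") model.
# sigma is the extinction coefficient of the medium; distance is along the view ray.

def fog(surface, inscatter, sigma, distance):
    """Extinction covers absorption plus out-scattering; the second term
    models light scattered into the view ray by the medium."""
    t = math.exp(-sigma * distance)          # transmittance along the ray
    return surface * t + inscatter * (1.0 - t)
```

At zero distance the surface color passes through untouched; as distance grows, the result converges to the in-scattered color, which is the volumetric look described above.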

[quote name='MegaPixel' timestamp='1348818490' post='4984649']
[...]

The resulting effect is a sort of volumetric effect...
[/quote]

Ok, so you are talking about sky light (the light reflected from the sky)? Light shafts? ...

[quote name='mauro78' timestamp='1348847187' post='4984749']
[...]

Ok, so you are talking about sky light (the light reflected from the sky)? Light shafts? ...
[/quote]

Yeah MegaPixel, light shafts are a perfect example of in-scattering ;)

