Tree Penguin

Realtime HDR Rendering with SM3.0 question


Hi, I read a few articles/papers on realtime HDR rendering and I wondered if there's a way to retrieve the final 32-bit-per-component image so I can store or modify it later on. I mean the 32-bit-per-component color values before tone mapping and clamping occur (e.g. in the range 0.0 to MAX_FLOAT).

The answer might be obvious to those who are experienced with pixel shader (PS) coding, so my question might seem stupid, but please keep in mind that I'm not yet experienced with PS coding at all because of my crappy old hardware. Any help would be appreciated.

EDIT: Or even better, is it possible to pass multiple float4 variables back to the app, or some buffer limited to 0.0-MAX_FLOAT which I could read in the app? Thanks.

[Edited by - Tree Penguin on September 14, 2004 12:58:39 PM]

Could you render it to a texture instead of the framebuffer?

It'd have to be a 32-bit floating-point format texture, which I know nothing about [smile], but that seems like a logical way of doing it.

Andy
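
For reference, a minimal sketch of what "render into a floating-point texture" can look like under Direct3D 9; the thread doesn't say whether D3D or OpenGL is being used, so treat this only as one possible route. The device pointer, width and height are assumed to be supplied by the application:

#include <d3d9.h>

// Minimal sketch: create a 32-bit-per-component float texture and bind it as
// the render target so pixel-shader output is stored without being clamped.
// "device", "width" and "height" are assumed to be supplied by the app.
IDirect3DTexture9* CreateHdrTarget(IDirect3DDevice9* device, UINT width, UINT height)
{
    IDirect3DTexture9* hdrTex = NULL;

    HRESULT hr = device->CreateTexture(
        width, height, 1,
        D3DUSAGE_RENDERTARGET,        // must be usable as a render target
        D3DFMT_A32B32G32R32F,         // 32-bit float per component (A16B16G16R16F is the 16-bit variant)
        D3DPOOL_DEFAULT,
        &hdrTex, NULL);
    if (FAILED(hr))
        return NULL;                  // format not supported as a render target on this card

    IDirect3DSurface9* hdrSurf = NULL;
    hdrTex->GetSurfaceLevel(0, &hdrSurf);
    device->SetRenderTarget(0, hdrSurf);   // subsequent draws write into the float texture
    hdrSurf->Release();                    // the device keeps its own reference
    return hdrTex;
}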

Yeah, I thought of that, but I don't know whether all texture formats are supported as render targets (16- and 32-bit floating-point-per-component textures are supported). And if they are, they will probably get clamped to the 0.0-1.0 range.
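
Whether a given float format is accepted as a render target is something the runtime can be asked up front. A minimal D3D9-style check, assuming an existing IDirect3D9* and the current display-mode format, could look like this:

#include <d3d9.h>

// Minimal sketch: ask the D3D9 runtime whether a float format can be used as
// a render-target texture. "d3d" and "displayFormat" (the current display-mode
// format, e.g. D3DFMT_X8R8G8B8) are assumed to exist in the application.
bool IsFloatRenderTargetSupported(IDirect3D9* d3d, D3DFORMAT displayFormat, D3DFORMAT rtFormat)
{
    HRESULT hr = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT,
        D3DDEVTYPE_HAL,
        displayFormat,                // adapter/display-mode format
        D3DUSAGE_RENDERTARGET,        // we want to render into it
        D3DRTYPE_TEXTURE,             // as a texture, not just an offscreen surface
        rtFormat);                    // e.g. D3DFMT_A16B16G16R16F or D3DFMT_A32B32G32R32F
    return SUCCEEDED(hr);
}

// Example call: IsFloatRenderTargetSupported(d3d, D3DFMT_X8R8G8B8, D3DFMT_A32B32G32R32F)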

Apparently no, they aren't clamped (although I've not tested this myself, I admit).
IIRC, with NV cards you have to be explicit about wanting floating point, but with ATI cards it just works once you set up a floating-point context and bind a floating-point texture to it.
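
To get the unclamped values back into the application, as the original post asked, one possible D3D9 route is to copy the float render target into a system-memory surface and lock it. A sketch, assuming the device, the HDR render-target surface and its dimensions already exist in the app:

#include <d3d9.h>

// Minimal sketch: copy the float render target into system memory and read the
// raw, unclamped values on the CPU. "device", "hdrSurf" (the render-target
// surface), "width" and "height" are assumed to come from the application.
void ReadBackHdr(IDirect3DDevice9* device, IDirect3DSurface9* hdrSurf, UINT width, UINT height)
{
    IDirect3DSurface9* sysmem = NULL;
    if (FAILED(device->CreateOffscreenPlainSurface(
            width, height, D3DFMT_A32B32G32R32F, D3DPOOL_SYSTEMMEM, &sysmem, NULL)))
        return;

    // GPU-to-CPU copy; this stalls the pipeline, so only do it when the data is needed.
    if (SUCCEEDED(device->GetRenderTargetData(hdrSurf, sysmem)))
    {
        D3DLOCKED_RECT lr;
        if (SUCCEEDED(sysmem->LockRect(&lr, NULL, D3DLOCK_READONLY)))
        {
            for (UINT y = 0; y < height; ++y)
            {
                const float* row = (const float*)((const BYTE*)lr.pBits + y * lr.Pitch);
                // row[x*4 + 0] .. row[x*4 + 3] are the four float channels of pixel x,
                // still in their full 0.0..MAX_FLOAT range (no tone mapping, no clamping).
                (void)row;  // placeholder: store or modify the values here
            }
            sysmem->UnlockRect();
        }
    }
    sysmem->Release();
}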

OK, thanks. I am targeting NVIDIA Cg (which AFAIK is not compatible with ATI cards (am I right?)), so I think I will have to test it on an NVIDIA card. You gave me some hope anyway ;).

The reason I am hoping to use 16- or 32-bit floats per component is that I am working on an animation renderer (an old project of mine which I've resumed working on), and I at least want to make such high-quality rendering possible.
