
B_old

Posted 09 January 2013 - 02:16 AM

Quote:
You could try storing just x and y in 16-bit channels and reconstructing z. But you need to steal one bit of information from another channel to get good results. If you have that extra bit available, this would be an easy and fast encode/decode scheme with okay quality.

I thought about this as well. I prefer to have the normals in a single render target, though, which means I have to cram x, y, and the sign of z into that one target. I have not tried it yet, but it should work; I wonder how it will perform.
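For what it's worth, here is a minimal sketch of that packing, written as plain C rather than shader code. The RG16_UNORM-style layout, the choice to steal the red channel's low bit for the sign, and the function names are all assumptions for illustration, not a claim about how anyone actually implements it:

```c
#include <math.h>
#include <stdint.h>

/* Pack a unit normal into two 16-bit channels: x and y quantized,
   with the sign of z stored in the least significant bit of the
   first channel. Assumes the input is unit length. */
static void encode_normal_rg16(float nx, float ny, float nz,
                               uint16_t *r, uint16_t *g)
{
    /* Map x and y from [-1,1] to [0,1] and quantize to 16 bits. */
    uint16_t qx = (uint16_t)((nx * 0.5f + 0.5f) * 65535.0f + 0.5f);
    uint16_t qy = (uint16_t)((ny * 0.5f + 0.5f) * 65535.0f + 0.5f);

    /* Steal the low bit of the first channel for the sign of z. */
    qx = (uint16_t)((qx & 0xFFFE) | (nz < 0.0f ? 1u : 0u));

    *r = qx;
    *g = qy;
}

static void decode_normal_rg16(uint16_t r, uint16_t g,
                               float *nx, float *ny, float *nz)
{
    int z_negative = r & 1u;

    /* Undo the quantization (the masked channel only spans 0..65534). */
    float x = ((float)(r & 0xFFFE) / 65534.0f) * 2.0f - 1.0f;
    float y = ((float)g / 65535.0f) * 2.0f - 1.0f;

    /* Reconstruct z from the unit-length constraint, restore its sign. */
    float z = sqrtf(fmaxf(0.0f, 1.0f - x * x - y * y));
    if (z_negative) z = -z;

    *nx = x; *ny = y; *nz = z;
}
```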

 

Quote:
Spherical coordinates will give you pretty good precision in world space, especially in 16-bit. However, you'll eat up some ALU cycles on trig instructions. Another option is to simply transform from world space to view space, compress using the spheremap transform, and decompress/transform back to world space when sampling your G-Buffer. This isn't always as bad as you think, especially on GPUs with oodles of ALU to spare. There are also best-fit normals, but I'm not a big fan of having to use a lookup texture.

I overlooked the spherical coordinates approach. It seems like a good option to me. 

Until now I have transformed my normals to world space whenever I needed them, and of course I could leave it that way. But I feel a bit uncomfortable about it, because then it seems preferable to do the typical lighting calculations in view space and only transform to world space when there is no other option. Maybe I could mix both, since I reconstruct both the world-space and the view-space position from exactly the same source. But mixing spaces is exactly what I wanted to avoid in the first place, on the assumption that it would be a bit clearer to do everything in one space. It depends a bit on how expensive the spherical coordinates really are.
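For reference, a sketch of what a spherical-coordinate encode/decode could look like, again in plain C. This is one common variant that stores the azimuth and z directly (rather than two angles), which saves an acos on encode; whether that is exactly the scheme meant above is an assumption:

```c
#include <math.h>

#define PI_F 3.14159265358979f

/* Encode a unit normal as (azimuth, z), both remapped to [0,1] so they
   fit two 16-bit channels. */
static void encode_normal_spherical(float nx, float ny, float nz,
                                    float *u, float *v)
{
    *u = (atan2f(ny, nx) / PI_F) * 0.5f + 0.5f; /* azimuth: [-pi,pi] -> [0,1] */
    *v = nz * 0.5f + 0.5f;                      /* store z = cos(theta) directly */
}

static void decode_normal_spherical(float u, float v,
                                    float *nx, float *ny, float *nz)
{
    float phi = (u * 2.0f - 1.0f) * PI_F;
    float z   = v * 2.0f - 1.0f;
    float s   = sqrtf(fmaxf(0.0f, 1.0f - z * z)); /* sin(theta) */

    *nx = cosf(phi) * s;
    *ny = sinf(phi) * s;
    *nz = z;
}
```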

 

Just looking at world-space normals, which do you think is faster: decoding a WS normal stored in spherical coordinates, or decoding a VS normal stored with the spheremap transform and transforming it back to world space?
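Not a definitive answer, but as a rough basis for comparing the two, here is a sketch of the second option: a spheremap (Lambert azimuthal equal-area) encode/decode of the view-space normal, followed by a rotation back to world space using the transpose of the view matrix's upper 3x3 (valid only when the view matrix contains no scaling). The row-major world-to-view matrix convention and the function names are assumptions for illustration:

```c
#include <math.h>

/* Spheremap (Lambert azimuthal equal-area) encode/decode of a unit
   view-space normal into two [0,1] values. */
static void encode_normal_spheremap(float nx, float ny, float nz,
                                    float *u, float *v)
{
    /* Degenerate only at nz == -1, i.e. a normal pointing directly away
       from the camera, which rarely ends up in the G-Buffer anyway. */
    float f = sqrtf(8.0f * nz + 8.0f);
    *u = nx / f + 0.5f;
    *v = ny / f + 0.5f;
}

static void decode_normal_spheremap(float u, float v,
                                    float *nx, float *ny, float *nz)
{
    float fx = u * 4.0f - 2.0f;
    float fy = v * 4.0f - 2.0f;
    float f  = fx * fx + fy * fy;
    float g  = sqrtf(1.0f - f / 4.0f);

    *nx = fx * g;
    *ny = fy * g;
    *nz = 1.0f - f / 2.0f;
}

/* Rotate a view-space normal back to world space. For a rotation-only
   view matrix the inverse of its upper 3x3 is the transpose, so we
   multiply by the transpose of the world-to-view rotation. */
static void view_to_world(const float view[3][3], /* row-major, world-to-view */
                          float nx, float ny, float nz,
                          float *wx, float *wy, float *wz)
{
    *wx = view[0][0] * nx + view[1][0] * ny + view[2][0] * nz;
    *wy = view[0][1] * nx + view[1][1] * ny + view[2][1] * nz;
    *wz = view[0][2] * nx + view[1][2] * ny + view[2][2] * nz;
}
```

On paper the spheremap decode plus the rotation is a sqrt and a handful of multiply-adds, while the spherical decode above needs sin/cos (and atan2 on encode), so the spheremap path often comes out cheaper; that said, it is worth profiling on the target hardware rather than assuming.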

 

I also wonder how Epic gets away with it in their demo. Maybe it's really not that apparent with proper geometry/materials. Or do you think the remark "Gaussian Specular for less aliasing" has anything to do with it?

 

Thanks for the replies, that gives me some options to try!

