Efficient Hammersley distribution on the GPU
I'm looking for an efficient way to generate Hammersley distribution points on the GPU. I found this page with some GLSL code that relies on bit shifts. Is this efficient on the GPU? Are you aware of a more efficient implementation? Would it be better to create a look-up table?
On Nvidia hardware, I think bitwise shifts are probably the most expensive integer operation, and integer operations are already somewhat more expensive than floating-point ones. For instance, a GTX 760 can execute a theoretical 188 billion integer shifts per second, compared to a theoretical 2.25 trillion floating-point operations per second.
This topic is closed to new replies.