For instance, I own a Gigabyte GeForce GTX 660, and the technical specifications I've found say the memory bandwidth is 144.2 GB/s. My question is: in this case, is 1 GB = 1,000,000,000 bytes or 1 GB = 2^30 bytes?
I thought it was the former, but my profiler says my application reaches maximum speeds of 155.83 GB/s. The profiler could be wrong too. That's why I'd like to know whether I should change the metric used in my performance calculation, whether my Gigabyte GeForce GTX 660 is somehow better than I hoped, or whether my profiler needs to be checked. I really hope it's not the last option, hehe.
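To show how much the choice of unit alone changes the reported number, here is a minimal sketch in C++. The byte count and elapsed time are hypothetical placeholder values, not figures from my actual application; the point is only that dividing by 10^9 versus 2^30 shifts the result by about 7.4%.

```cpp
#include <cstdio>

int main() {
    // Hypothetical figures: 2 GiB moved (reads + writes) in 15 ms.
    const double bytes_transferred = 2.0 * (1ull << 30);
    const double seconds = 0.015;

    // Same measurement, expressed with two different definitions of "GB".
    const double decimal_gb_per_s = bytes_transferred / seconds / 1e9;          // 1 GB  = 10^9 bytes
    const double binary_gib_per_s = bytes_transferred / seconds / (1ull << 30); // 1 GiB = 2^30 bytes

    printf("decimal: %.2f GB/s\n", decimal_gb_per_s);   // larger number
    printf("binary:  %.2f GiB/s\n", binary_gib_per_s);  // ~7.4%% smaller
    return 0;
}
```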