GPU precision question

1 comment, last by Jengu 16 years, 1 month ago
I'm setting up a class for plots/graphs and using OpenGL to render. The idea is that a big list of 3D vertices, each made of three doubles, will be generated, and the plot class will take the list and render it as a 3D plot. The vertices are doubles, but since GPUs really only work with 32-bit floats, I'm wondering how to set things up with the fewest precision problems.

My best thought so far is that for a 3D plot I call glOrtho(xmin, xmax, ymin, ymax, zmin, zmax), where the x/y/z min/max variables specify the range actually being plotted, and then call glVertex3d for each of the points to plot them. I figure this way the 3D space matches the data as closely as possible, so there should be less error. But because I'm ignorant of how OpenGL works internally, I don't know whether that's actually true. I figure if I picked a different range and converted the data to it in software, that would introduce some error, but maybe the GPU is doing that conversion anyway? I'm not a floating point expert either, so maybe there's some range that would be best for the 3D space regardless of the range of the data?
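To make that idea concrete, here is a rough sketch of the setup being described, assuming a legacy fixed-function OpenGL context; the function, struct, and parameter names are placeholders rather than anything from the actual plot class:

/* Sketch: projection matched exactly to the data bounds, points
 * submitted as doubles. Assumes an already-created legacy GL context. */
#include <stddef.h>
#include <GL/gl.h>

typedef struct { double x, y, z; } Vertex3d;

static void draw_plot(const Vertex3d *pts, size_t count,
                      double xmin, double xmax,
                      double ymin, double ymax,
                      double zmin, double zmax)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* glOrtho's last two arguments are near/far distances along -z,
     * so the data's z range [zmin, zmax] maps to near = -zmax,
     * far = -zmin with the default (identity) modelview matrix. */
    glOrtho(xmin, xmax, ymin, ymax, -zmax, -zmin);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_POINTS);
    for (size_t i = 0; i < count; ++i)
        glVertex3d(pts[i].x, pts[i].y, pts[i].z);
    glEnd();
}

Note that glOrtho and glVertex3d do accept doubles, but the driver will typically convert both the projection matrix and the vertex data to 32-bit floats before the GPU ever sees them.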
I don't have answers, but I do have a question: why are you using doubles for the vertices in the first place? Have you already tested floats and decided that they don't provide enough precision?
Yeah, in this case it's desirable to keep as much precision as possible intact because the data has uses outside of the plotting. At some point in the future I might switch over to an arbitrary-precision type, too.

