

Gazoo101

Member Since 15 Apr 2011
Offline Last Active Jan 20 2012 08:49 AM

Topics I've Started

Explanation for qualitatively different interpolation

18 January 2012 - 07:25 AM

Dear Forum,

I've come across a curious issue that I hope someone with more experience can help me understand properly. I've been visualizing some volume data (on two NVIDIA 280M cards), and recently I decided to change the on-board representation of the data, with some (at least to me) surprising results.

I originally represented the data as 16-bit shorts, rescaled to fit the GPU's traditional normalized 0-1 range. Rendering the data using simple ray casting and hardware trilinear interpolation yielded the image "short trilinear.jpg" (on the left, or top). Kind of blocky, but expected.

Then I got my hands on some more data, and certain inconsistencies between the new and old data made me look for a way to avoid rescaling altogether. 16-bit floats to the rescue. After a little fisticuffs with my engine, it saw the error of its ways, and voila: "float trilinear.jpg" (on the right, or bottom) is the result.

Notice anything different? I sure do... In fact, I'm kind of astonished and puzzled as to what the cause is. I have a few guesses as to what could be the cause, but I'd love some feedback on the issue. Here are my thoughts:

The data is clearly more detailed and more smoothly interpolated when visualizing the 16-bit floating-point data. Possible causes:
  • Possible loss of precision due to rescaling into the 0 to 32767 range. Potentially not all bits are being used, since I upload the values with signed precision in both cases and the original values span roughly -3012 to 4012. But they are scaled to fill the entire 0 to 32767 range, so I fail to see how the loss of precision could be high enough to produce such a visually different result (see the rough numbers in the sketch after this list).
  • Interpolation precision changes due to data representation? The data on the GPU is represented as integers in one case and as floats in the other. Perhaps the graphics hardware interpolates and then rounds the result back to the storage type? I would have thought the same floating-point registers are used for trilinear interpolation regardless of the original data type.
  • 16-bit floats have higher precision around 0 and sparser precision the further out the values go. Perhaps that has an effect in this case? I'd say my value concentration is densest from about -350 to 800 (looking at the original data).
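
For what it's worth, here is a quick back-of-the-envelope sketch (just my own arithmetic, assuming the -3012 to 4012 range quoted above and standard IEEE half floats with a 10-bit mantissa) comparing the step sizes of the two representations. If I've got the math right, the rescaled integers should, if anything, resolve finer steps than half floats near the data maximum, which only adds to my confusion:

// Back-of-the-envelope step sizes for the two representations.
// The -3012..4012 range is the one quoted above; everything else is plain arithmetic.
#include <cmath>
#include <cstdio>

int main()
{
    const double dataMin = -3012.0, dataMax = 4012.0;
    const double dataSpan = dataMax - dataMin;                  // ~7024 original units

    // Case 1: rescaled into 0..32767 and normalized on upload.
    const double intStep = dataSpan / 32767.0;                  // original units per integer level
    std::printf("rescaled 16-bit int step: %f units\n", intStep);

    // Case 2: stored directly as IEEE half floats (10-bit mantissa).
    // Spacing near a value x is 2^(floor(log2|x|) - 10); evaluate at the data maximum.
    const double x = 4012.0;
    const double halfStep = std::ldexp(1.0, (int)std::floor(std::log2(std::fabs(x))) - 10);
    std::printf("half-float step near %.0f: %f units\n", x, halfStep);
    return 0;
}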
If anyone has any input or can say "that one idea is definitely out", then I'd much appreciate it! Even more so if you have a source stating it...

Regards,

Gazoo

16 Bit Non-Clamped Texture

13 January 2012 - 02:59 PM

Ok Dudes,

I've searched for an hour and still can't figure out what I am doing wrong. I want to create a simple 16-bit float 3D texture that is not clamped, and I'm trying to accomplish it with this call:

glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16F_ARB,
             dims[0], dims[1], dims[2], 0,
             GL_LUMINANCE, GL_SHORT, volumeData);

The code runs fine, and yet my values are still clamped to 0-1. I must be missing something, or perhaps one of the types is incorrect, but I'm at a loss. Is there perhaps some call to an enable function that must precede the usage of non-clamped textures?

As far as I can tell it should work with 3D textures just fine...
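
For reference, a variant I have been considering (just a sketch, not yet verified on my setup) is converting the shorts to plain floats on the CPU before the upload, since I suspect the client-side data type might matter here:

// Sketch (untested): convert the signed shorts to plain floats on the CPU,
// then upload with GL_FLOAT as the client data type instead of GL_SHORT.
#include <vector>

std::vector<float> floatData(dims[0] * dims[1] * dims[2]);
for (std::size_t i = 0; i < floatData.size(); ++i)
    floatData[i] = static_cast<float>(volumeData[i]);

glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16F_ARB,
             dims[0], dims[1], dims[2], 0,
             GL_LUMINANCE, GL_FLOAT, floatData.data());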

Gazoo

10 Bit textures and bit-shifting on the GPU

26 December 2011 - 09:47 AM

Dear Forum,

I've found myself in an odd situation where space-conservation is of the utmost priority. I currently have a large amount of 2D vectors expressed in floats (x and y value). These floats are placed in a texture and uploaded to the GPU for usage in a GLSL program.

An easy way to minimize space is to compress the data down to 8 bits per component. So x and y in 8 bits. This provides ok results, but there's some loss of precision that I'd like to avoid or at least mitigate.

It occurred to me that by expressing the vector in polar form, I can more accurately decide where I need precision the most: in the direction or in the length of the vector. Consequently, I'd like to try using 10 bits for the direction and 6 bits for the length. But I'm at a loss as to whether this is even possible on the GPU. If anyone has any ideas or feedback, I'd be very grateful. This is what I know so far:

  • I'm uncertain whether the GLSL specification allows for textures whose components use 10 or 6 bits. Even if it does, it seems as though it might be very inefficient, since it's quite non-standard...
  • The GPU now supports bit-wise operations, so perhaps I could upload the values combined into a single 16-bit texture and then use bit-shifting to split them into two separate variables (see the rough pack/unpack sketch after this list)? What do you think?
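
To make that second idea concrete, here is the kind of pack/unpack I have in mind, written on the CPU side for clarity. The names, the 0..2π angle range, and the maxLength normalization are just assumptions for illustration; the shifts and masks should map directly onto GLSL's >> and & operators:

#include <cmath>
#include <cstdint>
#include <cstdio>

// Hypothetical pack/unpack for one polar vector: 10 bits of direction, 6 bits of length.
// Assumes angle is in [0, 2*pi] and length is in [0, maxLength].
uint16_t packPolar(float angle, float length, float maxLength)
{
    const float twoPi = 6.2831853f;
    uint16_t a = (uint16_t)std::lround((angle / twoPi) * 1023.0f) & 0x3FF;   // 10 bits
    uint16_t l = (uint16_t)std::lround((length / maxLength) * 63.0f) & 0x3F; // 6 bits
    return (uint16_t)((a << 6) | l);
}

void unpackPolar(uint16_t packed, float maxLength, float* angle, float* length)
{
    // The same shifts and masks would translate directly to a GLSL shader with bit ops.
    *angle  = ((packed >> 6) & 0x3FF) / 1023.0f * 6.2831853f;
    *length = (packed & 0x3F) / 63.0f * maxLength;
}

int main()
{
    float a, l;
    unpackPolar(packPolar(1.57f, 0.5f, 1.0f), 1.0f, &a, &l);
    std::printf("angle %.3f length %.3f\n", a, l);   // round-trip check
    return 0;
}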
Regards,

Gazoo

Delayed glut reshape callback

25 December 2011 - 08:19 AM

Hey hey hey (or ho ho ho as the case may be),

I'm running a semi-simple, yet convoluted graphics engine with some FBOs n textures and voxels and bits n thingy-ma-bobs... The point is, I have a bunch of textures that need resizing whenever I reshape the window. Unfortunately, the computer I have is kind of crappy, and although I'm not sure constantly resizing textures is good practice, it has a tendency to crash my computer. Completely. As in, it just comes to a grinding halt.

Anyway - the easiest course of action for me to avoid this nasty crashing business is to make sure all this resizing happens once, when the user has stopped resizing the window. I was wondering if anyone has a good idea as to how this might be accomplished in a neat and not-too-hacky way.

Ideally, I'd love for a bool to be set once the user has let the window go, and then the textures all get resized. I'm not a big fan of polling periodically to make sure that the user is no longer fiddling with the window.
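
One debounce-style idea I've been toying with (sketch only, untested, and I'm not even sure how GLUT behaves during the modal resize drag on Windows) is to re-arm a one-shot glutTimerFunc on every reshape and only rebuild the textures once no further reshape has arrived for a short while. The resizeAllTextures call below is a stand-in for whatever the engine actually does:

// Debounce sketch: only rebuild textures ~200 ms after the *last* reshape callback.
#include <GL/glut.h>

static int g_pendingW = 0, g_pendingH = 0;
static int g_reshapeGeneration = 0;          // bumped on every reshape callback

static void resizeAllTextures(int /*w*/, int /*h*/) { /* stand-in for the real engine call */ }

static void onResizeSettled(int generation)
{
    if (generation != g_reshapeGeneration)
        return;                              // a newer reshape superseded this timer
    resizeAllTextures(g_pendingW, g_pendingH);
}

static void onReshape(int w, int h)
{
    g_pendingW = w;
    g_pendingH = h;
    glViewport(0, 0, w, h);
    ++g_reshapeGeneration;
    glutTimerFunc(200, onResizeSettled, g_reshapeGeneration);
}

static void onDisplay() { glClear(GL_COLOR_BUFFER_BIT); glutSwapBuffers(); }

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("reshape debounce sketch");
    glutReshapeFunc(onReshape);
    glutDisplayFunc(onDisplay);
    glutMainLoop();
    return 0;
}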

Any good ideas?

Gazoo

P.s No coal plz...

Heap Corruption ahoy

24 July 2011 - 03:45 AM

Hey All,

My program crashes violently when I perform a delete [] on an existing array. Originally, it only crashed when running completely detached from a debugger (using Visual Studio 2010, btw). At that point I googled and found a bunch of recommendations for Application Verifier. I downloaded it, but found that when I attached it, the program no longer crashed, and the verifier reported no errors...

However, after fiddling a bit more in VS2010, I got the program to crash in release mode with the debugger attached. I thought I was being clever by printing where the array pointer I am deleting points to, on the theory that it was getting overwritten and thus causing the crash on delete.

No such luck... I'm pretty confused right now. The array has a size of 1 and never changes address, at least not from when it's created all the way up to when it's deleted, and yet delete [] still causes an access violation...
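
One thing I have not tried yet (just a sketch, MSVC debug-CRT only, so it assumes a debug build) is bracketing the suspect code with _CrtCheckMemory calls to narrow down when the heap first goes bad:

// MSVC debug-build sketch: _CrtCheckMemory() walks the debug heap and reports
// the first corrupted block, which can narrow down when the guard bytes get stomped.
#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>
#include <cassert>

void suspectCode()
{
    int* data = new int[1];        // stand-in for the real array
    assert(_CrtCheckMemory());     // heap consistent right after the allocation?
    // ... the code that runs between allocation and deletion ...
    assert(_CrtCheckMemory());     // still consistent just before the delete?
    delete[] data;
}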

Anyone have any ideas of what might help me pinpoint the problem?

Regards,

Gazoo
