Gazoo101

Members
  • Content count: 24
Community Reputation: 100 Neutral

About Gazoo101

  • Rank: Member
  1. Dear Forum, I've come across a curious issue that I hope someone with more experience can help me understand properly. I've been visualizing some volume data (on two Nvidia 280M cards), and lately I decided to change the on-board representation of the data, with some (at least to me) surprising results. I originally represented the data as shorts (16 bits), rescaled to fit the GPU's traditional 0-1 range. Rendering the data using simple ray casting and hardware trilinear interpolation yielded the image "short trilinear.jpg" (on the left (or top)). Kind of blocky, but expected. Then I got my hands on some more data, and certain inconsistencies between the new and old data caused me to look for a way to avoid rescaling. 16-bit floats to the rescue. After a little fisticuffs with my engine, it saw the error of its ways, and voila, "float trilinear.jpg" (on the right (or bottom)) is the result. Notice anything different? I sure do... In fact, I'm kind of astonished and puzzled as to the cause. I have a few guesses, but I'd love some feedback on the issue. Here are my thoughts: the data is obviously more detailed and better interpolated when visualizing the 16-bit floating point data. Causes could be...
[list]
[*]Loss of precision due to rescaling to 0-32767. Potentially not all bits are being used, since I upload the values with signed precision in both cases and the original values span roughly -3012 to 4012, but they are scaled to fit the entire range of 0 to 32767. So I fail to see how the loss of precision could be high enough to produce such a visually different result.
[*]Interpolation precision changes due to the data representation? The data on the GPU is represented as integers in one case and as floats in the other. Perhaps graphics hardware just interpolates and approximates to the same kind of value? I would think the same floating point registers are used for trilinear interpolation regardless of the original type of the data.
[*]16-bit floats have higher precision around 0 and sparser precision the further out the values go. Perhaps that has an effect in this case? I'd say my value concentration is densest from -350 to 800 (looking at the original data).
[/list]
If anyone has any input or can say "that one idea is definitely out", I'd much appreciate it! Even more so if you have a source stating it... Regards, Gazoo
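For context, a minimal sketch of the two upload paths being compared above (the dimensions and data pointers are placeholders, not the original engine's variables):
[code]
// Path A: 16-bit normalized integer texture. GL_SHORT data is treated as
// normalized fixed-point, so the raw values (about -3012..4012) must first
// be rescaled to span the full short range to use as many bits as possible.
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16,
             width, height, depth, 0,
             GL_LUMINANCE, GL_SHORT, rescaledShorts);

// Path B: 16-bit float internal format, fed 32-bit float data. No rescaling;
// the raw values are stored and trilinearly filtered as floating point.
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16F_ARB,
             width, height, depth, 0,
             GL_LUMINANCE, GL_FLOAT, rawFloats);
[/code]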
  2. 16 Bit Non-Clamped Texture

    Let it be known! I've spent a few hours and come to the following conclusion. Someone please correct me if I am wrong: OpenGL clamps (more precisely, normalizes) values IF they're provided in the GL_SHORT format (and perhaps other non-float formats). It does not matter if you want them represented as 16-bit float or 32-bit float internally; they WILL be clamped between 0 and 1. In other words, to upload actual NON-clamped data, stick to this upload code: [CODE] glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16F_ARB, tWidth, tHeight, tDepth, 0, GL_LUMINANCE, GL_FLOAT, ptexels); [/CODE] Note that the values MUST be provided as floats... Gazoo out...
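As a small follow-up, a sketch of converting the raw shorts to floats on the CPU before that upload (rawShorts and the use of std::vector are illustrative, not from the original code):
[code]
// Convert the short data to 32-bit floats so the values reach the
// GL_LUMINANCE16F_ARB texture without being normalized/clamped.
std::vector<float> ptexels(tWidth * tHeight * tDepth);
for (size_t i = 0; i < ptexels.size(); ++i)
    ptexels[i] = static_cast<float>(rawShorts[i]);

glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16F_ARB,
             tWidth, tHeight, tDepth, 0,
             GL_LUMINANCE, GL_FLOAT, ptexels.data());
[/code]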
  3. Ok Dudes, I've searched for an hour and still I cannot figure out what I am doing wrong. I want to make a simple 16-bit float 3D texture that is not clamped, and I'm trying to accomplish this via this call: [CODE] glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16F_ARB, dims[0], dims[1], dims[2], 0, GL_LUMINANCE, GL_SHORT, volumeData); [/CODE] The code runs fine, and yet my values are still clamped to 0-1. I must be missing something, or perhaps one of the types is incorrect, but I'm at a loss. Is there perhaps some call to an enable function that must precede the usage of non-clamped textures? As far as I can tell it should work with 3D textures just fine... Gazoo
  4. Super cool awesome bananas feedback on the topic. Love how helpful people are on this forum! Samith - good point about taking care with how the data is converted..! mhagain - Wow... I didn't even know that specific 10-bit formats existed, but given that I probably need at least 16 bits, it'll mostly be a feature I'll use depending on my curiosity. V-man - Useful to know that the format exists... I'm curious though... Is there a reason that that particular internal format is not mentioned on the following page: [url="http://www.opengl.org/sdk/docs/man/xhtml/glTexImage3D.xml"]http://www.opengl.org/sdk/docs/man/xhtml/glTexImage3D.xml[/url]
  5. Dear Forum, I've found myself in an odd situation where space conservation is of the utmost priority. I currently have a large number of 2D vectors expressed in floats (x and y value). These floats are placed in a texture and uploaded to the GPU for use in a GLSL program. An easy way to minimize space is to compress the data down to 8 bits per component, so x and y in 8 bits each. This provides OK results, but there's some loss of precision that I'd like to avoid or at least mitigate. It occurred to me that by expressing the vector in polar form, I can more accurately decide where I need precision the most: in the direction or in the length of the vector. Consequently, I'd like to try to use 10 bits for the direction and 6 bits for the length. But I am at a loss as to whether this is even possible on the GPU? If anyone has any idea or feedback, I'd be very grateful. This is what I know so far (see the sketch after this list):
[list]
[*]I'm uncertain whether the GLSL specification allows for textures with components of 10 or 6 bits. Even if it does, it seems as though it might be very inefficient, due to it being quite non-standard...
[*]The GPU now supports bit-wise operations. So perhaps I could upload the values combined in a 16-bit texture and then use bit-shifting operations to split them into two separate 16-bit variables? What do you think?
[/list]
Regards, Gazoo
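A minimal CPU-side sketch of the 10/6-bit packing idea described above (the function names, the exact bit split, and maxLen are illustrative assumptions; the unpack mirrors what a shader would do with bit-wise operations after fetching the 16-bit texel):
[code]
#include <cstdint>
#include <cmath>

// Pack a 2D vector into 16 bits: 10 bits of direction, 6 bits of length.
uint16_t packPolar(float x, float y, float maxLen)
{
    float angle  = std::atan2(y, x);                 // -pi..pi
    float length = std::sqrt(x * x + y * y);
    uint16_t a = (uint16_t)((angle / (2.0f * 3.14159265f) + 0.5f) * 1023.0f + 0.5f); // 0..1023
    uint16_t l = (uint16_t)(std::fmin(length / maxLen, 1.0f) * 63.0f + 0.5f);        // 0..63
    return (uint16_t)(((a & 0x3FF) << 6) | (l & 0x3F));
}

// Unpack: shift/mask the two fields back out and rebuild the vector.
void unpackPolar(uint16_t packed, float maxLen, float& x, float& y)
{
    float angle  = ((packed >> 6) / 1023.0f - 0.5f) * 2.0f * 3.14159265f;
    float length = (packed & 0x3F) / 63.0f * maxLen;
    x = std::cos(angle) * length;
    y = std::sin(angle) * length;
}
[/code]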
  6. Delayed glut reshape callback

    That's a good idea... I suppose I could set a bool in the reshape function and then test that when mouse up triggers... It turns out I was a fool and was resizing textures that did not exist. Although behavior is undefined, I still stand by my previous statement saying that my machine sucks... Thank you for the suggestion!
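For anyone finding this later, a minimal sketch of the debounce idea discussed in this thread, but using glutTimerFunc instead of the mouse-up test (assumes the usual GLUT headers; resizeAllTextures and the 200 ms delay are illustrative assumptions):
[code]
static int gResizeGeneration = 0;     // bumped on every reshape event
static int gPendingW = 0, gPendingH = 0;

void onResizeSettled(int generation)
{
    // Only act if no newer reshape arrived while this timer was pending.
    if (generation == gResizeGeneration)
    {
        // resizeAllTextures(gPendingW, gPendingH);  // hypothetical helper
        glutPostRedisplay();
    }
}

void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    gPendingW = w;
    gPendingH = h;
    ++gResizeGeneration;
    // Re-arm a short timer; the expensive texture resizing only runs once
    // reshape events stop arriving for ~200 ms.
    glutTimerFunc(200, onResizeSettled, gResizeGeneration);
}
[/code]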
  7. Hey hey hey (or ho ho ho as the case may be), I'm running a semi-simple, yet convoluted graphics engine with some FBOs n textures and voxels and bits n thingy-ma-bobs... The point is, I have a bunch of textures that need re-sizing whenever I reshape the window. Unfortunately the computer I have is kind of crappy, and although I'm not sure constantly re-sizing textures is good practice anyway, it has a tendency to crash my computer. Completely. As in, it just comes to a grinding halt. Anyway - the easiest course of action for me to take to avoid this nasty crashing business is to make sure all this resizing stuff happens once, when the user has stopped resizing the window. I was wondering if anyone has a good idea as to how this might be accomplished in a neat and not-too-hacky way. Ideally, I'd love for a bool to be set once the user has let the window go, and then the textures all get resized. I'm not a big fan of polling periodically to make sure that the user is no longer fiddling with the window. Any good ideas? Gazoo P.s. No coal plz...
  8. Heap Corruption ahoy

    Heyo! Thank you for the replies. I'm using an external library, so unfortunately I cannot rely on a std::vector. But I have done some good by moving a large portion of the memory allocation from the heap to the stack by turning things into references instead of using pointers. My crashing still seems to wander back and forth between code segments. However, right now it seems to occur a lot when the stringstream object is automatically brought out of scope. I read around a bit on the net and found that one of the most common problems is returning pointers into objects that immediately go out of scope, like [code] return myStringstream.str().c_str(); [/code] BUT! All of my uses of the aforementioned str().c_str() combination happen when calling another function that takes a C string. My question is, shouldn't the objects still be in scope at that point? Lookey code example here: [code] std::stringstream myStringstream; afunc(myStringstream.str().c_str()); [/code] Is that bad? Do I still need to allocate memory on the heap for the final C string and copy it there while the function runs? Gazoo
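For reference, a small self-contained illustration of the lifetime difference in question (afunc's signature is assumed; this is not the original code):
[code]
#include <sstream>

void afunc(const char* s);   // assumed: takes a C string

const char* bad()
{
    std::stringstream ss;
    ss << "hello";
    // The std::string returned by str() is a temporary; it is destroyed at
    // the end of this full expression, so the returned pointer dangles as
    // soon as the caller receives it.
    return ss.str().c_str();
}

void fine()
{
    std::stringstream ss;
    ss << "hello";
    // Here the temporary string lives until the full expression ends, i.e.
    // until afunc() has returned, so the pointer stays valid for the call.
    afunc(ss.str().c_str());
}
[/code]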
  9. Hey All, My program crashes violently when I perform a delete [] on an existing array. Originally, it only crashed when completely detached from a debugger (using Visual Studio 2010, btw). At that point I googled and found a bunch of recommendations for Application Verifier. I downloaded that, but found that when I attached it, the program no longer crashed and the verifier reported no errors... However, after fiddling a bit more in VS2010, I got my program to crash in release mode with the debugger attached. I thought I was clever when printing where the array pointer I am deleting points to, as I thought it would get overwritten and thus cause a crash when being deleted. No such luck... I'm pretty confused right now. The array has a size of 1 and never changes address, or at least not from when it's created all the way up to when it's deleted, and yet delete [] still causes an access violation... Anyone have any ideas of what might help me pinpoint the problem? Regards, Gazoo
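One classic pattern that produces exactly this symptom, offered purely as a hedged guess rather than a diagnosis of the actual program:
[code]
// An out-of-bounds write corrupts the allocator's bookkeeping next to the
// block; nothing fails at the time of the write, and the crash only
// surfaces later, at delete[] time, far away from the real bug.
short* data = new short[1];
data[1] = 42;        // off-by-one write past the end of the allocation
// ... program runs on happily for a while ...
delete [] data;      // heap corruption / access violation reported here
[/code]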
  10. Ok - After creating an individual project and messing about a bit with the FBO, I've become a little wiser. I'm not sure whether any of the stuff I'm about to say hasn't been mentioned in other FBO tutorials, but I've missed it, so I'm sure it cannot hurt to cover it again (a small sketch of the resulting two-pass setup follows this list):
[list]
[*]If no depth attachment is made - regardless of whether it would go to a renderbuffer or to a texture (glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, glsl_tex_depthbuffer, 0);) - then the resulting render to the FBO will not produce a depth-tested image, even if GL_DEPTH_TEST is enabled. It also doesn't seem to matter whether or not you actually list the depth attachment through glDrawBuffers... O_o go figure. Perhaps OpenGL just always writes depth if a depth attachment is present?
[*]glDrawBuffers only likes color attachments, such as GL_COLOR_ATTACHMENT0_EXT, and not GL_DEPTH_ATTACHMENT_EXT...
[*]Detaching mid-render (glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0)) causes OpenGL errors... Best to set up each individual scene. Modelview and projection can propagate across multiple renders.
[/list]
Especially the last bullet point seemed to be what was causing me grief. When I had various things glEnabled, detaching the FBO seemed to make OpenGL grumpy. But regardless, setting up isolated renders fixed the issue I was having... There are still a few things I am wondering about. I keep reading that it's faster to attach and detach textures rather than switching between different FBOs. But what about switching between color attachment points, versus de/re-attaching to the same color attachment? I am rendering two passes into two separate color textures. Just thought I'd ask if someone had any experience with this kind of thing...
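As mentioned above, a small sketch of the two-pass arrangement (attachment handling only; fbo and the draw calls are placeholders): both color textures stay attached the whole time, and only the draw buffer selection changes between passes, so nothing is detached mid-frame.
[code]
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

// Pass 1 renders into the texture on color attachment 0.
GLenum pass1 = GL_COLOR_ATTACHMENT0_EXT;
glDrawBuffers(1, &pass1);
// ... set up and draw pass 1 ...

// Pass 2 renders into the texture on color attachment 1.
GLenum pass2 = GL_COLOR_ATTACHMENT1_EXT;
glDrawBuffers(1, &pass2);
// ... set up and draw pass 2 ...

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   // unbind once, after both passes
[/code]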
  11. Hey bluntman, I'm trying to run gDEBugger - but this tool is new to me. It took me an hour just to deduce that the two versions, 5.8 and 6.0, are vastly different. Each presents its own challenges. The 5.8 version has trouble finding my source code files when I run it on my debug build. Not surprising, I suppose, as it doesn't know where to look. The other version integrates directly into VS2010, which is great (and terrible), except for the fact that AMD has had its mitts on it and kernel mode is disabled for non-AMD GPUs. I'm hazy on the details, but I think I can live without kernel debugging. To complicate matters just a little further, though, this version does not feature any break-on-error. I have no idea why not. Probably because some new features supersede the functionality, but reading through their manual has brought me no closer to figuring out how to work the damn thing. One of the errors detected by the 5.8 gDEBugger is a bind texture error. Not surprising, as the bind call has a texture id of several thousand. Tracking this in the 6.0 version sends me to some source that cannot be opened. When I instead step through every line, I see that after unbinding the FBO when setting it up for the first time, I end up in code I cannot see, and there at least one improper bind occurs... I have to figure this out... Thanks for the continued help though, bluntman... Be blunt ;) Gazoo
  12. I'm still working on seeing if I can somehow isolate the cause - but right now the only thing I am sure of is that the program absolutely refuses to render into the FBO when I attach something to the GL_DEPTH_ATTACHMENT_EXT. I can define drawing into the GL_DEPTH_ATTACHMENT_EXT using the glDrawBuffers command without any trouble, but as soon as I attach a texture to receive the depth information the whole pipeline goes belly up... So far, most of the examples I've seen on the net attach GL_DEPTH_ATTACHMENT_EXT to a renderbuffer and not a texture. I have not read anywhere that this is explicitly required, but I'm starting to wonder if it's even possible to write the color and depth information simultaneously to two separate textures... Gazoo
  13. Agreed... Any other ideas as to why I am getting this strange behavior?
  14. Hellew Ladies and Gentlemen, Here's the deal. I have a bunch of existing code that uses an FBO to render color information into an RGB texture. I'm slowly expanding what the FBO has to render to, since I'm improving the functionality of the program, and what triggered me to write this post is the following issue:
[code]
/***
 * Create FBOs and textures to render to...
 */

// Generate OpenGL iTem'z
glGenTextures(1, &glsl_tex_p1);
glGenTextures(1, &glsl_tex_colorbuffer);
glGenTextures(1, &glsl_tex_depthbuffer);
glGenFramebuffersEXT(1, &glsl_FBO);

glBindTexture(GL_TEXTURE_2D, glsl_tex_p1);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F_ARB, WINX, WINY, 0, GL_RGB, GL_FLOAT, 0);

glBindTexture(GL_TEXTURE_2D, glsl_tex_colorbuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F_ARB, WINX, WINY, 0, GL_RGBA, GL_FLOAT, 0);

glBindTexture(GL_TEXTURE_2D, glsl_tex_depthbuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, WINX, WINY, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);

glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, glsl_FBO);

// Pre-attach relevant textures
glFramebufferTexture2DEXT(GL_DRAW_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, glsl_tex_p1, 0);
glFramebufferTexture2DEXT(GL_DRAW_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_TEXTURE_2D, glsl_tex_colorbuffer, 0);
//glFramebufferTexture2DEXT(GL_DRAW_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, glsl_tex_depthbuffer, 0);

if(glCheckFramebufferStatusEXT(GL_DRAW_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    std::cout << "Something wrong with FBO" << std::endl;

glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
[/code]
Everything is peachy until I attach what was supposed to be the depth texture (the line commented out in the code above), which caused the existing usage of the FBO to spaz out. At first I was puzzled, because in my mind I'm only attaching textures; the commands that determine which buffers to write into remain unchanged. I've cruised the internet, and there are a number of reasons I can find that are potentially to blame: 1) Textures attached to an FBO must have the same size. This requirement (as I understand it) is no longer set in stone on newer hardware. Instead of outright failing, the call will take much longer to accommodate the difference in texture sizes. I was also under the impression that this requirement refers not only to the height and width of the texture, but also to the internal format. If that is the case, it's a bit strange that I can attach an additional color output with a different format (GL_RGBA instead of GL_RGB) and the FBO doesn't return an error. 2) That's it... I thought I had a second reason, but I don't...
I'm rendering some heavy-duty graphics which are static when untouched, and I wish to conserve power and performance by only re-rendering when something actually moves. The rest of the time, I'd prefer to just update the screen from the stored color and depth information. While I have your attention, I am also a bit confused about how the depth buffer supposedly needs to be rendered into a renderbuffer. I've read that if the color buffer is to be updated properly using depth testing, the depth buffer must be rendered into a renderbuffer attached to the FBO. I suppose this makes sense, as the FBO more or less replaces the window-assigned buffers. But if I then want to reuse the depth information, should I read it using glReadPixels, or run another rendering pass with an FBO rendering it to a texture (which is what I need it as)? This of course assumes that it's not possible to render both color and depth into individual textures simultaneously, which I hope it is... I hope someone with more wisdom can drop some knowledge on this thread... Gazoo
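For what it's worth, a minimal sketch of the configuration being aimed for here, with color and depth each going to their own texture (this reuses the texture names from the code above; whether it resolves the original crash is not claimed):
[code]
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, glsl_FBO);

// Color goes to a color attachment, depth to the depth attachment. The
// depth texture keeps its GL_DEPTH_COMPONENT32 internal format.
glFramebufferTexture2DEXT(GL_DRAW_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, glsl_tex_colorbuffer, 0);
glFramebufferTexture2DEXT(GL_DRAW_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, glsl_tex_depthbuffer, 0);

// glDrawBuffers lists only color attachments; the depth attachment is
// written automatically whenever depth testing/writing is enabled.
GLenum bufs[] = { GL_COLOR_ATTACHMENT0_EXT };
glDrawBuffers(1, bufs);

if (glCheckFramebufferStatusEXT(GL_DRAW_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    std::cout << "FBO incomplete" << std::endl;

// ... render the scene; afterwards both textures can be bound and sampled ...

glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
[/code]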
  15. Hey Forum, I'm in a bit of a fix... I'm having trouble creating and binding a 3D texture of an irregular size. I have no problem creating other 3D textures of irregular size, such as 256 width, 480 height, and 134 depth. But when I create a texture with the following code with an irregular size, I get an access violation error...
[code]
GLuint tempTex;
glGenTextures(1, &tempTex);
glBindTexture(GL_TEXTURE_3D, tempTex);

/****
 * glTexImage3D Debuggin'z
 */
unsigned int int_format2 = GL_LUMINANCE;
GLenum format2 = GL_LUMINANCE;
GLenum data_type2 = GL_SHORT;

short* vol2 = new short[199 * 442 * 56];
//short* vol2 = new short[vol_dim[0]*vol_dim[1]*vol_dim[2]*vol_dim[3]];

glTexImage3D(GL_TEXTURE_3D, 0, int_format2, 199, 442, 56, 0, format2, data_type2, vol2);

delete [] vol2;
[/code]
I'm certain there must be some simple mistake I'm overlooking, but I've simply been unable to spot it so far... The only alternative is that OpenGL perhaps dislikes really irregular textures, or that I'm overlooking some sort of specification...? Running on SLI GeForce 280M cards, btw... Regards, Gazoo
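One thing worth ruling out, offered only as a hedged guess: the default pixel unpack alignment. With GL_UNPACK_ALIGNMENT at its default of 4, each row of 199 16-bit texels (398 bytes) is assumed to be padded to a 4-byte boundary, so glTexImage3D reads slightly past the end of the buffer for sizes like 199 x 442 x 56, while "nicer" widths go unnoticed. Setting the alignment to 1 before the upload rules this out:
[code]
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed
glTexImage3D(GL_TEXTURE_3D, 0, int_format2, 199, 442, 56, 0,
             format2, data_type2, vol2);
[/code]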