

Member Since 15 Apr 2011
Offline Last Active Jan 20 2012 08:49 AM

Posts I've Made

In Topic: 16 Bit Non-Clamped Texture

14 January 2012 - 05:15 AM

Let it be known! I've spent a few hours and come to the following conclusion. Someone please correct me if I am wrong:

OpenGL clamps values IF they're provided in the GL_SHORT format (and perhaps other non-float formats). It does not matter whether you want them represented as 16-bit or 32-bit float internally. They WILL be clamped between 0 and 1.

In other words, to upload actual NON-clamped data, stick to this upload code:

glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16F_ARB, tWidth, tHeight, tDepth, 0, GL_LUMINANCE, GL_FLOAT, ptexels);

Note that the values MUST be provided in float...

Gazoo out...

In Topic: 10 Bit textures and bit-shifting on the GPU

30 December 2011 - 12:06 PM

Super cool awesome bananas feedback on the topic. Love how helpful people are on this forum!

Samith - good point with taking care on how the data is converted..!

mhagain - Wow... I didn't even know those specific 10-bit formats existed, but given that I probably need at least 16 bits, it'll mostly be a feature I explore out of curiosity.

V-man - Useful to know that the format exists... I'm curious though... Is there a reason that that particular internal format is not mentioned on the following page:
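For anyone else curious about the bit-shifting side of this: here's a rough sketch of how a 2_10_10_10-style layout packs into a 32-bit word. The channel ordering here (R in the low bits, A in the top two) is just one plausible layout, so treat it as illustrative rather than as the exact format a given driver uses:

```cpp
#include <cstdint>

// Pack three 10-bit channels and one 2-bit channel into 32 bits:
// R in bits 0-9, G in 10-19, B in 20-29, A in 30-31.
uint32_t packRGB10A2(uint32_t r, uint32_t g, uint32_t b, uint32_t a) {
    return (r & 0x3FFu)
         | ((g & 0x3FFu) << 10)
         | ((b & 0x3FFu) << 20)
         | ((a & 0x3u)   << 30);
}

// Matching unpack helpers: shift the channel down, then mask it off.
uint32_t unpackR(uint32_t p) { return p & 0x3FFu; }
uint32_t unpackG(uint32_t p) { return (p >> 10) & 0x3FFu; }
uint32_t unpackB(uint32_t p) { return (p >> 20) & 0x3FFu; }
uint32_t unpackA(uint32_t p) { return (p >> 30) & 0x3u; }
```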


In Topic: Delayed glut reshape callback

26 December 2011 - 08:49 AM

That's a good idea... I suppose I could set a bool in the reshape function and then test that when mouse up triggers...
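A tiny sketch of that flag idea, with the GLUT plumbing stubbed out (the member names and the commented-out resizeTextures step are placeholders, not real API):

```cpp
// Record resize requests in the reshape callback, but only do the
// expensive texture work once, when the mouse button is released.
struct DeferredResize {
    bool pending = false;
    int width = 0, height = 0;

    // Would be wired up as the glutReshapeFunc callback:
    // just remember the new size instead of resizing textures immediately.
    void onReshape(int w, int h) {
        width = w;
        height = h;
        pending = true;
    }

    // Would be called from the mouse-up handler; returns true if a
    // deferred resize was actually applied.
    bool onMouseUp() {
        if (!pending) return false;
        pending = false;
        // resizeTextures(width, height);  // hypothetical heavy work
        return true;
    }
};
```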

It turns out I was a fool and was resizing textures that did not exist. Although the behavior is undefined, I still stand by my previous statement that my machine sucks...

Thank you for the suggestion!

In Topic: Heap Corruption ahoy

25 July 2011 - 04:11 AM


Thank you for the replies. I'm using an external library, so unfortunately I cannot rely on an std::vector. But I have done some good by moving a large portion of the memory allocation from the heap to the stack by turning things into references instead of using pointers.

My crashing still seems to wander back and forth between code segments. However, right now it seems to occur a lot when the stringstream object automatically goes out of scope. I read around a bit on the net and found that one of the most common problems is returning objects that immediately go out of scope, like

return myStringstream.str().c_str();

BUT! All of my calls of that double-function form happen when calling another function that takes a C string. My question is: shouldn't the objects still be in scope at that point? Lookey code example here:

std::stringstream myStringstream;


Is that bad? Do I still need to allocate memory on the heap for the final cstring and copy it there while the function runs?
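To sanity-check the scope question with a small illustration: the temporary std::string returned by str() lives until the end of the full expression, so passing str().c_str() straight into a function that only reads the pointer during the call should be fine. The dangling-pointer problem shows up when the pointer is stored or returned. Here takesCString is just a stand-in for the library function:

```cpp
#include <sstream>
#include <string>

// Stand-in for a library function that takes a C string and uses it
// only for the duration of the call.
std::string takesCString(const char* s) {
    return std::string(s);  // copies the characters while they're valid
}

std::string safeCall() {
    std::stringstream ss;
    ss << "value: " << 42;
    // OK: the temporary from ss.str() outlives this full expression,
    // so the pointer is valid for the whole call to takesCString.
    return takesCString(ss.str().c_str());
}

// NOT OK: returning the pointer itself would dangle, because the
// temporary string is destroyed when the expression ends:
// const char* broken() { std::stringstream ss; ss << 42; return ss.str().c_str(); }
```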


In Topic: FBO depth and color to texture

04 July 2011 - 06:24 AM

Ok - After creating an individual project and messing about a bit with the FBO I've become a little wiser. I'm not sure whether any of the stuff I'm about to say has been mentioned in other FBO tutorials, but I missed it, so I'm sure it cannot hurt to cover it again:

  • If no depth attachment is made - and this seems to hold whether it would be via a renderbuffer or via a texture (glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, glsl_tex_depthbuffer, 0);) - then rendering to the FBO will not produce a depth-tested image, even if GL_DEPTH_TEST is enabled. It also doesn't seem to matter whether you list the depth attachment in glDrawBuffers... O_o go figure. Perhaps OpenGL just always does depth testing if a depth attachment is present?
  • glDrawBuffers only likes color attachments, such as GL_COLOR_ATTACHMENT0_EXT, and not GL_DEPTH_ATTACHMENT_EXT...
  • Detaching mid-render (glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0)) causes OpenGL errors... Best to set up each render individually. Modelview and projection can propagate across multiple renders.
The last bullet point in particular seemed to be what was causing me grief. When I had various things glEnabled, detaching the FBO seemed to make OpenGL grumpy. But regardless, setting up isolated renders fixed the issue I was having... There are still a few things I am wondering about.

I keep reading that it's faster to attach and detach textures than to switch between different FBOs. But what about switching between color attachment points, versus detaching and re-attaching to the same color attachment? I am rendering two passes into two separate color textures.

Just thought I'd ask if someone had any experience with this kind of thing...