# Is this a driver bug? (OpenGL)


## Recommended Posts

I'm having some troubles with glReadPixels(). I've narrowed it down to this single test case that consistently crashes on start up. I'm creating a 950x620 OpenGL window.
```cpp
uint32 nViewportWidth = 950;
uint32 nViewportHeight = 620;

// Crashes due to HEAP CORRUPTION; CRT detects application wrote to memory after end of heap buffer
uint8 * pPixels = new uint8[3 * nViewportWidth * nViewportHeight + 950 + 287];

// Doesn't crash
//uint8 * pPixels = new uint8[3 * nViewportWidth * nViewportHeight + 950 + 288];

// This line causes the crash (i.e. commenting it out = no crash)
glReadPixels(0, 0, nViewportWidth, nViewportHeight, GL_RGB, GL_UNSIGNED_BYTE, reinterpret_cast<void *>(pPixels));

delete[] pPixels;
```

Yet everything seems to work fine when I use GL_RGBA instead of GL_RGB.
```cpp
uint8 * pPixels = new uint8[4 * nViewportWidth * nViewportHeight - 1]; // Crashes, as expected
//uint8 * pPixels = new uint8[4 * nViewportWidth * nViewportHeight - 0]; // Works fine
glReadPixels(0, 0, nViewportWidth, nViewportHeight, GL_RGBA, GL_UNSIGNED_BYTE, reinterpret_cast<void *>(pPixels));
```

This has got to be a driver bug, right? Or is there a mistake somewhere in my code? Btw, the magic number that works for GL_RGB (3 * nViewportWidth * nViewportHeight + 950 + 288) is 1768238 = 2 * 17 * 131 * 397... I have no idea why it works for this number exactly. Extra points to whoever knows why.

##### Share on other sites
The size of a row of the image is 3*950 = 2850 bytes. The default row pack alignment is 4, which means that each row must start on an offset which is a multiple of 4 bytes, but 2850 is not divisible by 4. OpenGL will then adjust the write pointer at the end of each row such that the next row starts on an offset which is a multiple of 4, which means padding each row with 2 bytes. Reading back RGBA means each pixel is 4 bytes, and so the row size is always a multiple of 4, and will always play well with the default alignment.

Either account for this padding by rounding each row size up to the next multiple of 4 when computing the buffer size, or set the pack alignment to 1. See glPixelStore.

##### Share on other sites
Something's fishy here.

No, it's not the graphics card driver after all; I've tried running the same exe on another computer (with a completely different video card, made by another company) and I'm getting the *exact* same behaviour.

My next guess is that it's related to one of the libraries I'm using. Perhaps one of them overrides the new[] operator for its own memory management, something I'm not aware of.

Another weird thing is that my VC++ 2008 EE project is supposed to be producing an .exe only, but it's also creating a .lib and .exp file for some reason. I have no idea why.

```
1> Creating library C:\project\Debug\cse4431Project.lib and object C:\project\Debug\cse4431Project.exp
```

##### Share on other sites
Quote:
> **Original post by Brother Bob:** The size of a row of the image is 3*950 = 2850 bytes. The default row pack alignment is 4, which means that each row must start on an offset which is a multiple of 4 bytes, but 2850 is not divisible by 4. OpenGL will then adjust the write pointer at the end of each row such that the next row starts on an offset which is a multiple of 4, which means padding each row with 2 bytes. Reading back RGBA means each pixel is 4 bytes, and so the row size is always a multiple of 4, and will always play well with the default alignment.
>
> Either adjust for this padding by rounding the buffer size up so each row is rounded to nearest larger multiple of 4, or set the pack alignment to 1. See glPixelStore.

Ah, of course. I was thinking of something along those lines (memory alignment), but I didn't realize it was realigning itself with the memory on a per-row basis.

```
3*950 = 2850
ceil(2850/4) * 4 = 2852
2852 * 620 rows = 1768240 bytes
```

I guess 1768237 was getting rounded down to the nearest 4-byte multiple, 1768236, while 1768238 was getting rounded up to 1768240.

I was actually looking at the prime factorization of 1768240, but I couldn't figure out the pattern.

Thanks a lot Brother Bob! :D

P.S. If anyone knows why my project is producing .lib/.exp files, that'd be helpful too. But that's a secondary issue.

##### Share on other sites
While we are on the subject, the man page for glReadPixels has the following sentence:

> glReadPixels returns values from each pixel with lower left corner at (x + i, y + j) for 0<i<width and 0<j<height.

Is there a typo/mistake in there? I would assume it should read:

> glReadPixels returns values from each pixel with lower left corner at (x + i, y + j) for 0<=i<width and 0<=j<height.

Am I correct?

Especially since:

> width, height - Specify the dimensions of the pixel rectangle. width and height of one correspond to a single pixel.
