Reading a large texture using glReadPixels

2 comments, last by AhmedCoeia 11 years, 2 months ago

I'm using OpenGL ES on Ubuntu on an embedded platform. I'm trying to do some computer vision on a large 1024*1024 texture. I have written some shaders that operate on a large texture coming from a camera, and now I want to read the pixels back to do some manipulations.

Using glReadPixels to read back the texture generated by the shader is slow, so I thought of rendering the whole large camera texture onto a small quad using an FBO, then using glReadPixels to read the pixels from that small render target, and doing the processing there. If I want to show the result, I would render that FBO back up to the large size.

Basically the idea is to reduce the 1k*1k texture to a small quad, do the processing, then render it back to 1k*1k.

Would that solution work?


One of the problems I can see here is that your texture will be downsampled when you render 1024x1024 onto a small quad. If you do not mind losing some information from the texture, then that is fine. But if every pixel in that 1k*1k texture is important, then what you are trying to do is not going to work, because as you downscale you lose information, so when you upscale back to the original size you are not going to get back the information you lost. Once it is gone, it is gone.

Can the processing be done on the GPU? If so there is no need whatsoever to read it back (and the processing itself may well be faster too).


I just wanna do some thresholding and a few computer vision techniques using fragment shaders, then get the result into a texture and do a few things like find_contours, etc., which are done on the CPU, so I need the whole texture :/

This topic is closed to new replies.
