AGenereux

Video in OpenGL - Problems



Hello all, I've been working on a way of displaying live video on Linux. One of the big problems with this is that video comes in YUV420 format, a planar luminance/chrominance format used for compressed video. Displaying it on screen requires converting from YUV to RGB, which involves a lot of floating-point (or at the very least fixed-point) calculations. I wanted to come up with a faster way of doing it, so I looked into using OpenGL to do the color conversion in hardware. My test machine is a laptop with an NVIDIA GeForce Go 6600, running the 100.14.11 NVIDIA driver.

Essentially, my solution does the following:

1. Set up the OpenGL environment to use an orthographic projection of a fixed size equal to the width and height of the video.
2. Generate a generic RGB texture that will never actually be seen (basically just all black).
3. Set the OpenGL color matrix to convert YUV to RGB.
4. Get video one frame at a time in YUV420 format and convert it into 32-bit packed YUV format so that it can be run through the color matrix properly.
5. Replace the appropriate part of the texture with the new frame data using glTexSubImage2D.
6. Draw a rectangle with corners matching the projection so that it fills the entire window, textured with the newly written frame data.

The video looks perfect, so it's basically working. However, the CPU usage is considerably higher than I expected. From what I understand, the texture copying will inevitably take up some resources, and it seems to take about 15% CPU. However, the color matrix transformation takes up 20%, and the data conversion from planar YUV420 to packed YUV is taking close to 50%.

My questions are thus:

1. The color matrix usage seems high to me, given that, from my understanding of hardware acceleration, matrix transformations should essentially be done entirely on the GPU. Is this not correct? Is there a way to find out whether the hardware is even being used? I can't think of a reason that OpenGL would be running in software, but I really can't tell.

2. I know next to nothing about shaders apart from the fact that they exist, so this is kind of a shot in the dark. That being said, I was thinking that maybe I could use a shader to take in unmodified, planar YUV420 data and do the conversion to RGB directly from that in the shader. Is such a thing even possible? Would I actually get any benefit from doing it this way, or would I just bog down the card? Bear in mind that this would involve using data from several separate locations to calculate a pixel, rather than just converting one pixel to another. I don't actually need specifics on how to do this; I just want to know whether trying would be a waste of time.

I'd appreciate whatever help I could get on this issue.

Thanks,
Andre
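EDIT: For concreteness, here is a trimmed-down sketch of how I'm doing steps 3 to 5. I'm assuming BT.601 coefficients here, with the repacked frame carrying Y/U/V in the R/G/B channels and alpha forced to 255, so that the matrix's fourth column can apply the -0.5 chroma offsets. Names and layout are illustrative, not my exact code.

/* Color-matrix upload sketch. Requires the ARB_imaging subset, which
 * provides the GL_COLOR matrix applied during pixel-transfer
 * operations such as glTexSubImage2D. */
#include <GL/gl.h>

/* Column-major, as glLoadMatrixf expects: out = M * (Y, U, V, 1). */
static const GLfloat yuv_to_rgb[16] = {
     1.0f,    1.0f,    1.0f,   0.0f,  /* column 0: Y feeds R, G and B  */
     0.0f,   -0.344f,  1.772f, 0.0f,  /* column 1: U contribution      */
     1.402f, -0.714f,  0.0f,   0.0f,  /* column 2: V contribution      */
    -0.701f,  0.529f, -0.886f, 1.0f   /* column 3: -0.5 chroma offsets */
};

void upload_frame(GLuint tex, int w, int h, const GLubyte *yuva)
{
    glMatrixMode(GL_COLOR);
    glLoadMatrixf(yuv_to_rgb);   /* applied while the pixels are transferred */
    glMatrixMode(GL_MODELVIEW);

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, yuva);
}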

Quote:
Original post by AGenereux
1. The color matrix usage seems high to me, given that, from my understanding of hardware acceleration, matrix transformations should essentially be done entirely on the GPU. Is this not correct? Is there a way to find out whether the hardware is even being used? I can't think of a reason that OpenGL would be running in software, but I really can't tell.
The color matrix has never really seen widespread use, and I don't think it's hardware-accelerated in the first place. It belongs to the ARB_imaging pixel-transfer path, which drivers commonly implement on the CPU, so pushing every frame through it would burn CPU in exactly the way you're measuring. I have no proof of this, but your measurements are quite a strong hint.
Quote:
Original post by AGenereux
2. I know next to nothing about shaders apart from the fact that they exist, so this is kind of a shot in the dark. That being said, I was thinking that maybe I could use a shader to take in unmodified, planar YUV420 data and do the conversion to RGB directly from that in the shader. Is such a thing even possible?
Yes, there's a .FX file in the NVIDIA SDK (NVSDK) that you can adapt to GL. I must admit I don't know whether it does exactly what you need, but I guess it's worth a try.
I fear you'll have to download the full NVSDK, because the online version looks trimmed down (the included files are not accessible). :-(
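In case it helps while you wait for the download, the core of such a shader is small. Here is a minimal GLSL sketch (my own, not NVIDIA's), assuming you upload the three planes as separate GL_LUMINANCE textures (Y at full resolution, U and V at half resolution) bound to three samplers, with BT.601 coefficients:

/* Fragment shader sketch: planar YUV420 -> RGB.
 * All three textures are addressed with the same 0..1 coordinates,
 * so the sampler's bilinear filtering upsamples the chroma planes. */
uniform sampler2D yTex;   /* full-resolution luma plane        */
uniform sampler2D uTex;   /* half-resolution chroma plane (U)  */
uniform sampler2D vTex;   /* half-resolution chroma plane (V)  */

void main()
{
    float y = texture2D(yTex, gl_TexCoord[0].st).r;
    float u = texture2D(uTex, gl_TexCoord[0].st).r - 0.5;
    float v = texture2D(vTex, gl_TexCoord[0].st).r - 0.5;

    gl_FragColor = vec4(y + 1.402 * v,
                        y - 0.344 * u - 0.714 * v,
                        y + 1.772 * u,
                        1.0);
}

With no vertex shader attached, the fixed-function pipeline still fills in gl_TexCoord[0], so this should drop straight into your existing textured-quad setup.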
Quote:
Original post by AGenereux
Would I actually get any benefit from doing it this way, or would I just bog down the card? Bear in mind that this would involve using data from several separate locations to calculate a pixel, rather than just converting one pixel to another. I don't actually need specifics on how to do this; I just want to know whether trying would be a waste of time.

From experience, the FX shader provided by NVIDIA runs fast on my system, which is low-to-mid range by today's standards!
I believe it will be almost free. Consider that tone mapping is a similar operation, easily involving three different textures and dozens of texel reads per pixel, and it's nothing the GPU really struggles with.
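On the upload side, the shader approach should also kill your 50% repack cost, because you can hand the decoder's planes to GL untouched. A rough sketch (untested; assumes GL 1.3-era multitexture entry points are available and that the decoder gives you one contiguous YUV420 buffer):

/* Upload the three YUV420 planes as-is: w*h bytes of Y, then
 * (w/2)*(h/2) bytes each of U and V. tex[0..2] are GL_LUMINANCE
 * textures already created at the matching sizes. */
#include <GL/gl.h>

void upload_planes(GLuint tex[3], int w, int h, const GLubyte *frame)
{
    const GLubyte *y = frame;
    const GLubyte *u = y + w * h;
    const GLubyte *v = u + (w / 2) * (h / 2);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[0]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, y);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, tex[1]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w / 2, h / 2,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, u);

    glActiveTexture(GL_TEXTURE2);
    glBindTexture(GL_TEXTURE_2D, tex[2]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w / 2, h / 2,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, v);
}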
