Nielske

OpenGL using PBO vs D3D using LOCKED_RECT


Hey everybody, I have a video decoder written in C++, and for the last step (showing it on screen) I have three choices for the implementation: OpenGL_0 using a PBO and glDrawPixels(), OpenGL_1 using a PBO to fill a texture and drawing it on a quad, and D3D using D3DLOCKED_RECT. Since I've implemented all three, I've plotted the performance. I would expect all three implementations to perform about the same, since the major bottleneck is the video decoding (which is exactly the same code in all three).

But now I've measured the execution time needed for decoding and showing 300 frames for each, and here is what I notice: for the bigger sequences (higher resolution and/or higher bitrate), OGL and D3D perform about the same. But when I go to QCIF at bitrate 200, D3D outperforms OGL dramatically. I can't seem to find any explanation for this; can any of you help? I will try to upload the chart somewhere, but here is the data itself (execution time per sequence):

Sequence    OGL0        OGL1        D3D0
1            8           7.963471    8.299505
2            7.431835    7.251726    7.689281
3            3.994258    3.994181    1.901451
4            1.975796    1.895158    0.277657
5            7.455745    7.201237    7.731317
6            7.994398    7.805569    8.371072
7            3.827756    3.990566    1.932564
8           26.100944   25.432730   27.387770

Sequences 1, 2, 5, 6 and 8 are heavy sequences (bitrate > 1000; sequence 8 is 720p, for example), while sequence 4 is QCIF at bitrate 200, and there the D3D execution time beats OGL big time. Any help would be appreciated.

PBOs were invented so that you can upload/download textures asynchronously; in other words, the driver does the transfer when it finds the best time. In general, the texture will be ready on the second frame, i.e. after SwapBuffers.
The other thing to verify is the pixel format: GL_BGRA is supported natively by all GPUs, so other formats may trigger a conversion on upload.
The texture dimensions can also have an effect; power-of-two sizes are probably the ideal case.
