How does OpenGL draw to a Window Handle?

Started by
8 comments, last by Lunatix 11 years, 7 months ago
Hello!

I would like to know how a graphics API like OpenGL finally draws to a window's handle. I know what a PIXELFORMATDESCRIPTOR or an HDC is, but my question is: how does OpenGL draw the final "pixels" to the control / handle? I downloaded the Mesa GL source code and I'm trying to find something, but maybe someone knows a sample, or another solution for my problem?
The easy (although not helpful) answer is "it doesn't".

Bringing the actual pixels to the screen is not the task of OpenGL; the actual screen surface (and, in modern times, the entire GPU memory) is owned by the window manager. Once upon a time the IHV driver used to own everything, but with the advent of compositing desktop managers, even the drivers don't own those resources any more (you can thank Windows Vista for starting that mess).

What OpenGL does is draw to a buffer which it owns by courtesy of the window manager, and finally it kindly asks for this buffer to be presented.
Thank you :) This is a good approach / hint! Now I can turn to Google with more specific "knowledge" of my problem ;)
I think I will get this working, because I'd like to get "direct" access to a buffer for my raytracer, without calling GDI or using bitmaps - such a "step in the middle" costs too much time.

Thanks :)
You will never get "direct" access to the actual frame buffer that is displayed on the screen; there will always be "some" step in between.

That last copy (or two) will not have a big impact on a raytracer, though, since the actual raytracing is much heavier.
I'd recommend you use SDL: set up a 2D framebuffer and draw to that; it should be plenty fast enough.

It's possible you could get slightly more direct access using the DirectX APIs.
Write your own GPU driver or bootable software, that way you can access all the buffers you want! But seriously, Windows gives you GDI and DirectX (and OpenGL) for exactly these kinds of things, and everyone uses them - the big rendering studios, too. When you upload a texture to VRAM, it'll be rendered pretty much as directly as it can be.
Very funny, powly K ;)

I didn't want that kind of "direct" access; it was more a question of how OpenGL or DirectX brings the final pixels to the window, just to understand the technique. I tried something like creating a panel and setting a bitmap as its background image, then writing to that image - this operation costs ~400 ms (C#; my first raytracing approach was written in it, but I have now ported the code to C++ as a library for more speed). Because of this lack of performance, I thought I could get a bit more "direct" access.
It actually is the way. OpenGL and DirectX are implemented as drivers for the GPU. Everything drawn goes through the GPU nowadays, so you can't just write directly into screen memory like in DOS and earlier times. I've done some software rendering in the past and used a simple StretchDIBits to the window DC - that was more than adequate for simple things in realtime. Though you could do your tracer on the GPU altogether, which is what many people are doing nowadays.

I'm curious though, what exactly took 400ms? A memcpy of your picture?
Once I get the raytracing working again, I will modify my raytracing code a bit and use OpenCL for the computation. The problem was that I had my own "Pixmap" the final colors were drawn to, and I had to copy these colors (public struct RTColor) to the Bitmap via "MyBitmap.SetPixel(System.Drawing.Color, ..)". This took too long, and I didn't want to write unsafe code, because I think that if you're programming in a managed environment, you should not use unsafe code until no other solution is left.
That, plus the fact that I had to make unsafe calls to get OpenCL working with C#, and the speed advantage of C++, led me to this solution - rewriting the raytracer's core in C++.

And I don't think C# is very good for this type of project - having all those vectors, colors and rays managed by the GC is (for me) too heavy a performance hit.

My core is finished now, and my first (because simpler) solution is an rtContext which has an abstract rtPixelBuffer class to which it outputs the final pixels. So in C# I call "wrtCreateContext(HDC DeviceContext)" via DllImport, which creates a "GdiPixelBuffer", and the result is a valid rtContext with an rtPixelBuffer (GdiPixelBuffer).
You should really not worry much about this.

Raytrace all you want; when a frame is done, make a buffer object, map it, memcpy the frame into it, and unmap it. Tell OpenGL to draw a textured quad, or do a framebuffer blit if you have version 3.2 available. Then start raytracing the next frame, and don't give a f... about what happens otherwise.

A memcpy worth a full screen of 32-bit pixels takes 1.2 to 1.3 milliseconds on my [slow, old] machine. There is also some time the driver needs to push the data over the PCIe (or AGP, or whatever you have) bus. Assuming the bus is not busy, that's another 0.6 to 1.2 milliseconds, depending on what kind of bus you have. So, assuming pretty much the worst case, let's say 5 milliseconds. So what? First, your "frame budget" at a typical 60 fps is 16.6 milliseconds (can you raytrace anything more complicated than 3 spheres at 60 fps? I will bow to you!), and more importantly, those 5 milliseconds are not visible to you, because command submission, data submission, and drawing run asynchronously.

If it takes 5 ms to transfer the data, then the current frame will of course appear on screen 5 ms later, but it would never be shown earlier than at the next sync anyway. And nothing prevents you from already raytracing the next frame during that time.

With vertical sync enabled, your OpenGL thread will normally block inside SwapBuffers (though many recent implementations defer this to the first draw command after SwapBuffers, or even let you pre-render 2-3 frames before blocking) until the frame is on screen. That's a kind of "natural" throttle (and a good thing). Still, nothing prevents you from running all your raytracing in a pool of worker threads (producers) to make use of that time. The OpenGL thread may block or not; you can still pre-render the next frame.
samoth: In some cases you're right - I shouldn't worry about it. But in my case, I want to gain experience! I don't want to look up at those people who write libraries like OpenGL or DirectX and think "these people must be some kind of gods *_*" - no, I want to be one of those people who know the mechanics behind the scenes - just a little bit!

By the way, I got it working. I searched and found that my solution uses CreateDIBSection and BitBlt and other basic GDI functions, and I think it works well. After programming this, I found an equivalent method in the SDL source code :)

And now I can call "wrtCreateContext(MyPanel.Handle);" from C#, and my library does all the steps necessary to create a GdiPixelBuffer and an rtContext for drawing :)
