OpenGL design

22 comments, last by 21st Century Moose 10 years ago

In the OpenGL wiki it says: "Although it is possible for the API to be implemented entirely in software, it is designed to be implemented mostly or entirely in hardware." Can anyone expand on that? How would it be implemented entirely in software, and how is that different than on hardware?

Wiki page:

http://en.wikipedia.org/wiki/OpenGL



How would it be implemented entirely in software, and how is that different than on hardware?

The front-end is still the OpenGL API, but the back-end uses another API, usually a 2D API natively supported on the target platform. In that case the in-between steps that normally run on the GPU have to be emulated on the CPU, e.g. the transform, projection, and rasterization (to name just the basic ones). Blending is probably supported by the 2D API already. In general, though, one has to implement all of the steps needed to produce the final pixel data.
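As a minimal sketch of what "emulating the transform" means (all names here are hypothetical, for illustration only, not taken from any real implementation), a software path has to do the matrix multiply, perspective divide, and viewport mapping itself:

/* Hypothetical 4x4 column-major matrix and vertex types, for illustration. */
typedef struct { float m[16]; } Mat4;
typedef struct { float x, y, z, w; } Vec4;

/* Multiply a vertex by a column-major matrix (OpenGL's convention). */
static Vec4 mat4_mul_vec4(const Mat4 *a, Vec4 v)
{
    Vec4 r;
    r.x = a->m[0]*v.x + a->m[4]*v.y + a->m[8]*v.z  + a->m[12]*v.w;
    r.y = a->m[1]*v.x + a->m[5]*v.y + a->m[9]*v.z  + a->m[13]*v.w;
    r.z = a->m[2]*v.x + a->m[6]*v.y + a->m[10]*v.z + a->m[14]*v.w;
    r.w = a->m[3]*v.x + a->m[7]*v.y + a->m[11]*v.z + a->m[15]*v.w;
    return r;
}

/* Emulate the fixed-function transform: modelview-projection multiply,
   perspective divide, then viewport mapping to window coordinates. */
static Vec4 transform_vertex(const Mat4 *mvp, Vec4 v, int width, int height)
{
    Vec4 clip = mat4_mul_vec4(mvp, v);
    clip.x /= clip.w;                                /* clip space -> NDC */
    clip.y /= clip.w;
    clip.z /= clip.w;
    clip.x = (clip.x * 0.5f + 0.5f) * (float)width;  /* NDC -> pixels */
    clip.y = (clip.y * 0.5f + 0.5f) * (float)height;
    return clip;
}

A real implementation also clips against the view frustum before the divide; that step is omitted here for brevity.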

Perhaps the best known software renderer is in Mesa 3D. You may look into its source code if you're really interested.

With a hardware implementation your program issues OpenGL commands. Your driver takes these commands and converts them to something your graphics card can understand. Your graphics card does all the work (drawing/etc).

With a software implementation all of the work is done in software instead. So vertex setup, transformation, clipping, rasterization, fragment shading and blending are all performed in software to build a final image which is then written to your display.
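To make "rasterization in software" concrete, here is a hedged sketch of the classic edge-function inner loop (hypothetical names, not any real renderer's code; real implementations clip first and don't scan the whole screen per triangle):

/* Signed area test: on which side of edge (ax,ay)-(bx,by) does (cx,cy) lie? */
static float edge(float ax, float ay, float bx, float by, float cx, float cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

/* Fill one triangle into a width*height framebuffer, one pixel at a time.
   Assumes consistent (counter-clockwise) vertex winding. */
static void raster_triangle(unsigned *fb, int width, int height,
                            float x0, float y0, float x1, float y1,
                            float x2, float y2, unsigned color)
{
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float px = x + 0.5f, py = y + 0.5f; /* pixel center */
            if (edge(x0, y0, x1, y1, px, py) >= 0.0f &&
                edge(x1, y1, x2, y2, px, py) >= 0.0f &&
                edge(x2, y2, x0, y0, px, py) >= 0.0f)
                fb[y * width + x] = color;      /* inside all three edges */
        }
    }
}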

This is a bit simplified, of course. Often OpenGL will perform some work in hardware and some in software, balancing the work between the two processors (CPU and GPU) and selecting whichever is best for each task (depending on what your graphics card is able to do).

If you're interested in exploring a software implementation be aware that they are slow. I don't mean they're half the speed, or quarter, or even one-tenth. They're play-Quake-at-less-than-one-frame-per-second slow. This is OK if all you ever write is trivial tech demos. If you want to do anything serious, forget about software right now.


Mesa3D is a fully compliant OpenGL implementation that includes a full software renderer.

If you're interested in exploring a software implementation be aware that they are slow. I don't mean they're half the speed, or quarter, or even one-tenth. They're play-Quake-at-less-than-one-frame-per-second slow. This is OK if all you ever write is trivial tech demos. If you want to do anything serious, forget about software right now.

I strongly disagree. I would call 3D rendering for film to be quite serious. I would also call 3D rendering for printed material to be quite serious.

OpenGL is not just about games. OpenGL is about rendering generally. That might mean rendering for a 320x240 cell phone. That might mean rendering for a 1080p television display. That might mean rendering a massive scientific dataset at 43200x28800 resolution, or even much more.

If your 3D experience is limited only to games with fast frame rates and soft-realtime requirements, then it might be reasonable to only think about hardware implementations. But if your 3D viewpoint includes offline processing, such as rendering that takes place in movies, print, other physical media, or scientific rendering, software rendering is a pretty good thing.

Think about the resolution we get out of modern graphics cards.

Monitors with single-link DVI top out around 1920x1200 resolution. That's about 2.3 megapixels. Most 4K screens are about 8.3 megapixels. Compare that with photographers who complain about not being able to blow up their 24-megapixel images. In the physical film world, both 70mm and wide 110 are still popular when you are blowing things up to wall size, either in print or in movies. The first is about a 58-megapixel equivalent, the second about a 72-megapixel equivalent.
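For what it's worth, the megapixel figures are just width times height; a quick sanity check (my own arithmetic, not from the post above):

#include <stdio.h>

int main(void)
{
    /* Megapixels = width * height / 1,000,000. */
    printf("1920 x 1200 = %.1f MP\n", 1920 * 1200 / 1e6); /* ~2.3 */
    printf("3840 x 2160 = %.1f MP\n", 3840 * 2160 / 1e6); /* ~8.3 */
    return 0;
}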

When you see an IMAX 3D movie, I can guarantee you they were not worried about how quickly their little video cards could max out on fill rate. They use an offline process that generates large, high-quality images very slowly.

OpenGL is first and foremost a rendering API. It does not specify output media, nor does it specify mandatory resolutions. Games might be a common use, but they are not the only use.

The rendering API allows you to use any resolution you want, and allows an implementation to output the image to whatever media it wants, including saving them to disk. All that matters is that rendering happens.

Let's say you are working with scientific computing rather than games. And let's say your scientific image needs to be frequently referenced, so you decide to print it in high resolution and mount it on a wall. You are in America, where they still use inches, so you select a relatively common professional poster size, a 72 inch x 48 inch print, specifying a 600dpi resolution for a high-quality image. A bit of math means you need to render to a 43200 x 28800 pixel image for your scientific data set.
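The arithmetic is simply physical size times print resolution on each axis (again, just a sanity check of the numbers above):

#include <stdio.h>

int main(void)
{
    const int dpi = 600;                      /* print resolution */
    const int width_in = 72, height_in = 48;  /* poster size in inches */
    printf("%d x %d pixels\n", width_in * dpi, height_in * dpi);
    /* Prints: 43200 x 28800 pixels */
    return 0;
}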

When you configure your somewhat unconventional display size, you will not be concerned about fill rate or frames per second. Also, you will want to set your rendering hints to favor quality, not performance.
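In classic OpenGL those quality-versus-speed trade-offs are expressed through glHint. These calls are real API, though the spec allows an implementation to ignore hints entirely:

#include <GL/gl.h>

/* With a current GL context bound, ask the implementation to favor
   quality over speed wherever it has a choice. Hints are advisory. */
static void prefer_quality(void)
{
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
}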

The OpenGL specification is based around operations and results. It specifies operations, not nanoseconds.

OpenGL is not just about games.

True, but the site is gamedev.net, so I'm assuming a more limited scope: i.e. a comparison of hardware-accelerated OpenGL via typical consumer-level 3D cards vs. the typically encountered software implementations (Mesa and Microsoft's).


They're play-Quake-at-less-than-one-frame-per-second slow.

Nonsense. Quake _was_ software-rendered, on CPUs more than a decade old. GLQuake was a separate thing that a lot of people at the time could not run because they had no 3D hardware.

Naive, graphics-101 software renderers are often slideshows even on simple scenes, sure. But modern multi-threaded, SIMD-using, cache-aware, JIT-compiled (for shaders, interpolation, conversions, etc.) software renderers are quite speedy. Both Mesa and DirectX ship with one, in fact (LLVMpipe and WARP, respectively). There are companies today that sell even faster software-rendering middleware for running modern games at real-time speeds on systems whose under-featured graphics cards cannot run modern shaders (certainly not Arkham City on Ultra settings at 4K, but still). Even a number of games that fully saturate a high-end GPU still do some software rasterization on the side, for things like occlusion queries.
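If you want to try one of those modern software paths yourself, Mesa respects the LIBGL_ALWAYS_SOFTWARE environment variable (a real Mesa variable; the surrounding program is only a sketch, and setenv is POSIX):

#include <stdlib.h>

int main(void)
{
    /* Tell Mesa to use its software renderer (e.g. LLVMpipe) even when a
       hardware driver is available. Must be set before the GL context is
       created, i.e. before initializing your windowing library. */
    setenv("LIBGL_ALWAYS_SOFTWARE", "1", 1);
    /* ... create window, create GL context, render as usual ... */
    return 0;
}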


You can definitely play games like Quake 1 - 3 on a software-only renderer. With things like AVX2, DDR4 and CPUs with 8+ cores, the performance disparity between software rendering and "hardware" rendering decreases significantly.
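To illustrate the kind of SIMD gain being referred to (a hedged sketch, not code from any actual renderer; compile with -mavx), a single AVX instruction can process eight floats at once:

#include <immintrin.h>

/* Scale 'count' color values by a constant, eight floats per iteration;
   a scalar loop would do one. This is the flavor of speedup SIMD brings
   to software rasterization. 'count' is assumed to be a multiple of 8. */
static void scale_colors_avx(float *dst, const float *src,
                             int count, float factor)
{
    const __m256 f = _mm256_set1_ps(factor);
    for (int i = 0; i < count; i += 8) {
        __m256 v = _mm256_loadu_ps(src + i);
        _mm256_storeu_ps(dst + i, _mm256_mul_ps(v, f));
    }
}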

Maybe I should have said "play a game with 16-year-old graphics at less than 1 fps" instead then, eh?

The point is that "Quake" wasn't meant to be taken literally, and it's a shame that it was, because doing so detracts from the point being made here: the common software implementations are slower than is practical for real use.


I was expecting someone to point out that both "hardware rendering" and "software rendering" run on hardware. It's just that the first one runs on the GPU and the second one on the CPU. That's all there is to it.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

If your 3D experience is limited only to games with fast frame rates and soft-realtime requirements, then it might be reasonable to only think about hardware implementations. But if your 3D viewpoint includes offline processing, such as rendering that takes place in movies, print, other physical media, or scientific rendering, software rendering is a pretty good thing.

Sorry, I am new to this. What are soft-realtime requirements? And I think hardware implementations are processes that go through the GPU? I'm not quite sure what software rendering is. Processes that are implemented through the CPU?

