OpenGL design

22 comments, last by 21st Century Moose 10 years ago


I'm not quite sure what software rendering is

A software renderer is any renderer which is implemented in software instead of by specialized hardware, such as a GPU.



I'm not quite sure what software rendering is. Processes that are implemented on the CPU?

Software rendering runs on the CPU, hardware rendering runs on the GPU.

If your 3D experience is limited only to games with fast frame rates and soft-realtime requirements, then it might be reasonable to only think about hardware implementations. But if your 3D viewpoint includes offline processing, such as rendering that takes place in movies, print, other physical media, or scientific rendering, software rendering is a pretty good thing.

Sorry, I am new to this. What are soft-realtime requirements? And I think hardware implementations are processes that go through the GPU? I'm not quite sure what software rendering is. Processes that are implemented on the CPU?

A realtime requirement is that the software must complete the task within a certain amount of time.

There are soft and hard requirements.

Some examples are probably in order.

Imagine the machine that puts the caps on glass beverage bottles. The machine is part of an assembly line and the bottles flow through rapidly. If the machine stamps the cap at the wrong time the results are an error. It might form a bad seal or even break the bottle. There is a very specific time window for the task. If there is a problem the result is catastrophic -- the glass bottle is unusable. This is called a hard realtime requirement.

Next, video games. Let's say the game is running on a commercial game console attached to a television. The game's screen is running at 60Hz. If the game takes too long to display a screen the results are not smooth and are considered an error. Each frame must be completed within the time window. Unlike the glass bottles, if the time constraint is not met the result is an annoyance but not catastrophic. This is called a soft realtime requirement.
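To put a number on that: at 60Hz each frame has a budget of 1000/60 ≈ 16.7 milliseconds. A minimal sketch of checking that budget in a game loop (purely illustrative, the names are made up):

```cpp
#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;
    const double budgetMs = 1000.0 / 60.0; // ~16.7 ms per frame at 60 Hz

    auto frameStart = clock::now();
    // ... simulate and render one frame here ...
    auto frameEnd = clock::now();

    double elapsedMs =
        std::chrono::duration<double, std::milli>(frameEnd - frameStart).count();
    if (elapsedMs > budgetMs) {
        // Soft realtime: a missed frame is an annoyance (a visible hitch), not a catastrophe.
        std::cout << "Missed the frame budget: " << elapsedMs << " ms\n";
    }
}
```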

As for the differences between software rendering and hardware rendering, the difference is where the work takes place. Simply put, the work of rendering takes place on the CPU instead of dedicated GPU hardware.

Here comes the history lesson.

Because it is relevant to the timeline, note that the hardware developments leading to 3D graphics are fairly recent. Integer division was frequently done in software for most of computing history. In the early '80s it was common to have a co-processor for division since many chips didn't support it. The x86 family included a dedicated integer divider, which contributed to its popularity in business machines. In the mid-1980s programs started relying on floating point math, which was slow but gave better results than fixed point math for many business uses. The result was the x87 co-processor for floating point math, which many businesses paid a premium for.

It wasn't until around 1990 that dedicated floating point hardware penetrated the home computer market, and even then it was pretty rare. Few major games could rely on hardware floating point being present, and it wasn't until around 1993 or so that mainstream games started to require 486DX processors with dedicated floating point hardware.

Even the Nintendo DS, which launched in 2004, had no floating point hardware and relied on a dedicated co-processor for integer division.

Before 3D graphics cards became common around the 2000-2002 era, 3D programs would do all the math in software, occasionally taking advantage of dedicated math co-processors. They would compute the result as a large 2D image and then display that image.

Before 1995 or so, everything was done in software. The results were usually computed with relatively slow software floating point and relatively slow main memory, which leads to today's common belief that software rendering is too slow to be useful. While many people remember them as slow, note that they were doing software-based floating point on sub-25MHz machines (rather than multi-core multi-GHz machines) and memory speed was several hundred nanoseconds (rather than the 3.75ns in today's newer machines).

Since many systems had 2D graphics acceleration in the mid-1990s, many games would do all the math on the fancy new dedicated floating point processors to transform the polygons, and then use the 2D hardware's line drawing or polygon drawing functions to render everything. It wasn't as pretty, but wireframe 3D graphics still provided great games in the early '90s.

The first few consumer-level 3D cards appeared in mid-1995. (There were very expensive cards before that, used for scientific simulations and specialized CAD software.) They provided dedicated hardware for matrix math: instead of doing all the math on the main processor, specialized floating point hardware could perform the matrix math in just a few cycles. These devices, which either replaced or supplemented existing graphics cards, usually also provided high-speed memory used for rendering, and many let you do all the rendering in place so you didn't need to copy the rendered image over to video memory.
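To make "the matrix math" concrete, this is roughly the per-vertex transform those cards took over from the CPU (a simplified sketch, not any particular card's API):

```cpp
// The per-vertex transform that early 3D hardware took over from the CPU:
// a 4x4 matrix multiplied by a 4-component vector.
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; }; // row-major for this sketch

Vec4 transform(const Mat4& M, const Vec4& v) {
    return {
        M.m[0][0] * v.x + M.m[0][1] * v.y + M.m[0][2] * v.z + M.m[0][3] * v.w,
        M.m[1][0] * v.x + M.m[1][1] * v.y + M.m[1][2] * v.z + M.m[1][3] * v.w,
        M.m[2][0] * v.x + M.m[2][1] * v.y + M.m[2][2] * v.z + M.m[2][3] * v.w,
        M.m[3][0] * v.x + M.m[3][1] * v.y + M.m[3][2] * v.z + M.m[3][3] * v.w
    };
}
// A software renderer runs this (plus clipping, lighting, and rasterization)
// for every vertex of every triangle, every frame.
```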

The next few rounds introduced hardware texturing and lighting. You could store textures on the card instead of main memory. When you needed to draw a triangle it would automatically copy, scale, and shear the texture as necessary for the triangle. The hardware lighting meant you could apply light levels to the triangle corners and it would lighten or darken the texture as needed.

To give you a feel for the timeline, hardware accelerated transform and lighting (T&L) first appeared in DirectX 7. Before that, Direct3D and OpenGL drivers could take advantage of the matrix math co-processors and specialized memory, but they still did quite a lot of the heavy work in software.

Today the vast majority of rendering work takes place on dedicated hardware. We can upload the textures, upload the meshes, upload the transformations, and upload compute-intensive programs (shaders) to the card. With all of them in place, we issue instructions to the card and it does all the heavy work on its own processors rather than the main CPU.
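In modern OpenGL terms that workflow looks roughly like the following. This is a trimmed sketch, not a complete program: it assumes a GL 3.3+ context, a linked shader `program`, and the CPU-side data (`vertices`, `pixels`, sizes, locations) already exist.

```cpp
// Sketch of the "upload once, then just issue draw calls" model.
GLuint vao, vbo, tex;

// Upload the mesh to GPU memory once.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

// Upload a texture to GPU memory once.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texWidth, texHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Every frame: bind state, hand the GPU a transform, and issue the draw call.
// All the heavy per-vertex and per-pixel work then runs on the GPU.
glUseProgram(program);
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, mvpMatrix);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```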

History lesson over. Time to wake up.

Software rendering just means the CPU does the work of turning point clouds and textures into a beautiful picture, rather than relying on dedicated hardware to do the job.
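For a concrete picture, here is a minimal sketch of what that means in practice (all names invented for illustration): the CPU writes every pixel into an ordinary block of memory, which is then copied to the screen.

```cpp
#include <cstdint>
#include <vector>

// A software "framebuffer" is just ordinary CPU memory.
struct Framebuffer {
    int width, height;
    std::vector<uint32_t> pixels; // 0xAARRGGBB

    Framebuffer(int w, int h) : width(w), height(h), pixels(w * h) {}

    void put(int x, int y, uint32_t color) {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[y * width + x] = color;
    }
};

// The CPU does all the per-pixel work itself, e.g. filling a rectangle.
void fillRect(Framebuffer& fb, int x0, int y0, int x1, int y1, uint32_t color) {
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x)
            fb.put(x, y, color);
}

int main() {
    Framebuffer fb(640, 480);
    fillRect(fb, 100, 100, 200, 200, 0xFFFF0000); // red square, computed entirely on the CPU
    // At this point the finished image would be copied ("blitted") to the screen.
}
```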

It should be added here:

In the OpenGL wiki it says: "Although it is possible for the API to be implemented entirely in software, it is designed to be implemented mostly or entirely in hardware." Can anyone expand on that? How would it be implemented entirely in software, and how is that different from a hardware implementation?

We're not just talking about "software rendering", we're talking about software implementations of OpenGL. The two things are not the same: it's possible to have a software renderer that looks and acts absolutely nothing like OpenGL. For the OP's benefit, all of the old software renderers from older (1990s and earlier) games were of this class. They were custom renderers written specifically for a game engine and highly tuned to run well on then-current CPUs. Much of the above discussion relates to this kind of software renderer, not to a software implementation of OpenGL.
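One practical way to see which kind of implementation you ended up with is to ask the context itself. Software implementations typically identify themselves in the renderer string, for example "GDI Generic" (Microsoft's software OpenGL 1.1) or "llvmpipe" (Mesa's software rasterizer). A small sketch, assuming the GL headers are included and a context is already current:

```cpp
#include <cstdio>

// Assumes an OpenGL context has already been created and made current.
void reportRenderer() {
    const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    const char* version  = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    std::printf("GL_VENDOR:   %s\n", vendor);
    std::printf("GL_RENDERER: %s\n", renderer); // e.g. "GDI Generic" or "llvmpipe" => software
    std::printf("GL_VERSION:  %s\n", version);
}
```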

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

Hey thanks a lot guys!

How many flops are you guys getting? Are you doing floating point ops in software or hardware?

I be getting over 9000 flops yo.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

If you want to do anything serious, forget about software right now.

This is very true, and I'm sorry I was not able to take part in this discussion earlier, since I think it went in the wrong direction.

Rendering time does matter! It matters a lot, so I have to disagree with most "facts" frob used to illustrate his opinion.

But if your 3D viewpoint includes offline processing, such as rendering that takes place in movies, print, other physical media, or scientific rendering, software rendering is a pretty good thing.

It is maybe good, but HW accelerated is better. With legacy OpenGL it really was necessary to implement algorithms like ray tracing on the CPU side, but now it is not. And if we have several orders of magnitude of acceleration through GPU usage, I simply don't understand why anybody would defend slower solutions.

There are some cases where the CPU can beat the GPU in rendering: when cache coherence is very weak, or when different technologies compete for resources and communicate through a large number of small buffers that have to be synchronized. In most cases, though, beating a GPU like GK110 (with 2880 cores, six 64-bit memory controllers, and GDDR5 memory) at graphics work (where parallelization can be massive) is almost impossible. And we are talking about orders of magnitude!

Think about the resolution we get out of modern graphics cards.

Monitors with single-link DVI top out around 1920x1200, which is about 2.3 megapixels. Most 4K screens are about 8.3 megapixels. Compare that with photographers who complain about not being able to blow up their 24 megapixel images. In the physical film world, both 70mm and wide 110 are still popular when you are blowing things up to wall size, either in print or in movies; the first is roughly a 58 megapixel equivalent, the second roughly 72 megapixels.

When you see an IMAX 3D movie, I can guarantee you they were not worried about how quickly their little video cards could max out on fill rate. They use an offline process that generates large high quality images very slowly.

What does the resolution matter? This is a very inappropriate example.

If a GPU can render a 2 megapixel scene in 16ms, a 72 megapixel scene can be rendered in 36 × 16ms = 576ms. That's only about 0.6s.

Using a CPU implementation (what we call "software"), it would take almost a minute.

Of course, it depends on the underlying hardware.

In the film industry, I bet it is not irrelevant whether post-production lasts several days or several months.

There are a lot of GPU accelerated renderers for professional 3D applications, although they use CUDA (probably because it was easier to port to CUDA than to OpenGL, and because OpenGL lacks precision control and its tessellation and compute support is relatively new).

There are companies today that sell even faster software rendering middleware for running modern games at real-time speeds on systems with under-featured graphics cards that cannot run modern shaders.

Can you give a useful link? How can the CPU be anywhere near the speed of the GPU while also doing other tasks (AI, game logic, resource handling, etc.)? This is science fiction, or far slower than it needs to be to be useful. Be honest: who would buy 3D video cards if games could be played smoothly on the CPU alone?

You can definitely play games like Quake 1-3 on a software-only renderer. With things like AVX2, DDR4, and CPUs with 8+ cores, the performance disparity between software rendering and "hardware" rendering decreases significantly.

I really doubt it. Any useful link to support this claim?


I really doubt it. Any useful link to support this claim?
Here is an overview of the Quake 2 software renderer:

http://fabiensanglard.net/quake2/quake2_software_renderer.php

IIRC, it was the main renderer people used when it came out; there weren't many people with hardware accelerated cards (we're talking about people running Quake 2 on the first Pentium CPUs, not 8-core number crunching monsters). All previous id games also used software renderers (Quake 1, Doom, Wolfenstein 3D). I'm not sure if Quake 3 had one.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator


I really doubt it. Any useful link to support this claim?
Here is an overview of the Quake 2 software renderer: http://fabiensanglard.net/quake2/quake2_software_renderer.php

The Quake 2 software renderer was not a software implementation of OpenGL, which was what the OP was asking about.

At this stage I really really regret even mentioning the word "Quake" here as my doing so seems to have steered this thread down a completely irrelevant path. What I meant was a Quake-like level of scene complexity, as in low-polycount, low-resolution textures, low screen resolution (maxing out at perhaps 640x480 or 800x600), no complex effects, etc.

And of course offline rendering still uses software, but again this is completely irrelevant. We're gamedev.net so we're talking about realtime rendering in a game engine using consumer-level hardware, unless explicitly stated otherwise.

So taking these two together, that's the kind of scenario where a software OpenGL implementation will get you below 1fps even with the low level of scene complexity I mention. Yes, even on a modern CPU. Anybody want a citation? OK, see for example this thread, where the OP got 0.5fps owing to using an OpenGL feature that was supported by the driver but not in hardware.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

This topic is closed to new replies.
