If your 3D experience is limited to games with fast frame rates and soft realtime requirements, then it might be reasonable to think only about hardware implementations. But if your view of 3D includes offline processing, such as the rendering done for movies, print, other physical media, or scientific rendering, software rendering is a pretty good thing.
Sorry, I am new to this: what are soft realtime requirements? I think hardware implementations are processes that go through the GPU? And I'm not quite sure what software rendering is. Is it processes that are implemented on the CPU?
A realtime requirement means the software must complete its task within a certain window of time.
There are soft and hard requirements.
Some examples are probably in order.
Imagine the machine that puts the caps on glass beverage bottles. The machine is part of an assembly line and the bottles flow through rapidly. If the machine stamps the cap at the wrong time, the result is an error: it might form a bad seal or even break the bottle. There is a very specific time window for the task, and missing it is catastrophic, since the glass bottle is unusable. This is called a hard realtime requirement.
Next, video games. Let's say the game is running on a commercial game console attached to a television, and the screen is refreshing at 60Hz. That gives the game roughly 16.7 milliseconds to produce each frame. If the game takes too long to display a frame, the motion is not smooth, and that is considered an error. Each frame must be completed within the time window, but unlike the glass bottles, a missed deadline is an annoyance rather than a catastrophe. This is called a soft realtime requirement.
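To make the frame budget concrete, here is a minimal sketch of a soft realtime loop in plain standard C++. The actual rendering work is stubbed out with a sleep, and the 16.7ms figure is just 1/60 of a second:

```cpp
// A minimal soft realtime frame loop. At a 60 Hz target, each frame has
// a budget of roughly 16.7 ms. Missing the budget drops a frame (an
// annoyance); it does not break anything (not catastrophic).
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto frame_budget = std::chrono::microseconds(16667); // ~1/60 s

    for (int frame = 0; frame < 300; ++frame) {
        const auto start = clock::now();

        // update_and_render() would go here; simulate a light frame's work.
        std::this_thread::sleep_for(std::chrono::milliseconds(5));

        const auto elapsed = clock::now() - start;
        if (elapsed > frame_budget) {
            // Soft realtime: note the missed deadline and carry on.
            std::puts("missed the 16.7 ms frame budget, frame dropped");
        } else {
            // Sleep off the rest of the budget to hold a steady 60 Hz.
            std::this_thread::sleep_for(frame_budget - elapsed);
        }
    }
}
```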
As for the difference between software rendering and hardware rendering, it comes down to where the work takes place. Simply put, in software rendering the work happens on the CPU instead of on dedicated GPU hardware.
Here comes the history lesson.
Because it is relevant to the timeline, note that the hardware developments leading to 3D graphics are fairly recent. Integer division was frequently done in software for most of computing history. In the early '80s it was common to have a co-processor for division, since many chips didn't support it. The x86 family included a dedicated integer divider, which contributed to its popularity in business machines. In the mid-1980s programs started relying on floating point math, which was slow but gave better results than fixed point math for many business uses. The result was the x87 co-processor for floating point math, which many businesses paid a premium for.
It wasn't until around 1990 that dedicated floating point hardware penetrated the home computer market, and even then it was pretty rare. Few major games could rely on hardware floating point being present, and it wasn't until around 1993 or so that mainstream games started to require 486DX processors, which had dedicated floating point hardware.
Even the Nintendo DS, which launched in 2004, did not have floating point hardware, and it also relied on a dedicated co-processor for integer division.
Before 3D graphics cards became common in the 2000-2002 era, 3D programs would do all the math in software, occasionally taking advantage of dedicated math co-processors. They would compute the results as a large 2D image and then display that image.
Before 1995 or so, everything was done in software. The results were usually computed with relatively slow software floating point and relatively slow main memory, which leads to today's common belief that software rendering is too slow to be useful. While many people remember those renderers as slow, note that they were doing software-based floating point on sub-25MHz machines (rather than multi-core multi-GHz machines), and memory access times were several hundred nanoseconds (rather than the 3.75ns of today's newer machines).
Since many systems had 2D graphics acceleration by the mid-1990s, many games would do all the math on the fancy new dedicated floating point processors to transform the polygons, and then use the 2D card's line-drawing or polygon-drawing functions to render everything. It wasn't as pretty, but wireframe 3D graphics still provided great games in the early '90s.
The first few consumer-level 3D cards appeared in mid-1995. (There were very expensive cards before that, used for scientific simulations and specialized CAD software.) They provided dedicated hardware for matrix math: instead of doing all the math on the main processor, specialized floating point hardware on the card could perform the matrix math in just a few cycles. These devices usually also provided high-speed memory for rendering, and many let you do all the rendering in place so you didn't need to copy the rendered image over to video memory. They either replaced or supplemented existing graphics cards.
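For a sense of what was being offloaded: transforming one vertex is a 4x4 matrix applied to a homogeneous point, 16 multiplies and 12 adds, repeated for every vertex in the scene. Here is a minimal sketch of that math in plain C++ (the translation matrix is just an illustrative example):

```cpp
#include <cstdio>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major

// One vertex transform: the operation early 3D hardware could perform
// in a few cycles instead of burning time on the main CPU.
Vec4 transform(const Mat4& a, const Vec4& v) {
    return {
        a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z + a.m[0][3]*v.w,
        a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z + a.m[1][3]*v.w,
        a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z + a.m[2][3]*v.w,
        a.m[3][0]*v.x + a.m[3][1]*v.y + a.m[3][2]*v.z + a.m[3][3]*v.w,
    };
}

int main() {
    // A translation by (10, 0, 0): the identity with a translation column.
    const Mat4 t = {{{1,0,0,10}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1}}};
    const Vec4 p = {1, 2, 3, 1};
    const Vec4 q = transform(t, p);
    std::printf("(%g, %g, %g, %g)\n", q.x, q.y, q.z, q.w); // (11, 2, 3, 1)
}
```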
The next few rounds introduced hardware texturing and lighting. You could store textures on the card instead of in main memory, and when you needed to draw a triangle the card would automatically copy, scale, and shear the texture as necessary. Hardware lighting meant you could assign light levels to the triangle's corners and the card would lighten or darken the texture as needed.
To give you a feel for the timeline, hardware-accelerated transform and lighting first gained API support in DirectX 7. Before that, Direct3D and OpenGL drivers could take advantage of the matrix math co-processors and specialized memory, but they still did quite a lot of the heavy work in software.
Today the vast majority of rendering work takes place on dedicated hardware. We can upload the textures, upload the meshes, upload the transformations, and upload compute-intensive programs (shaders) to the card. With all of that in place, we issue draw commands to the card and it does all the heavy work on its own processors rather than on the main CPU.
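As a rough sketch of what that division of labor looks like through a modern API (this assumes OpenGL 3.3 with the GLFW and glad libraries available; error checking is omitted for brevity):

```cpp
// Sketch of the modern split: upload a mesh and a shader program to the
// GPU once, then each frame is little more than a draw command; the card
// does the heavy work on its own processors.
#include <glad/glad.h>
#include <GLFW/glfw3.h>

static const char* vs =
    "#version 330 core\n"
    "layout(location = 0) in vec3 pos;\n"
    "void main() { gl_Position = vec4(pos, 1.0); }\n";
static const char* fs =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0, 0.5, 0.2, 1.0); }\n";

int main() {
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow* win = glfwCreateWindow(640, 480, "sketch", nullptr, nullptr);
    glfwMakeContextCurrent(win);
    gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);

    // Upload the mesh (one triangle) into GPU memory.
    float verts[] = { -0.5f,-0.5f,0, 0.5f,-0.5f,0, 0,0.5f,0 };
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glEnableVertexAttribArray(0);

    // Upload the compute-intensive programs: compile and link the shaders.
    GLuint v = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(v, 1, &vs, nullptr);
    glCompileShader(v);
    GLuint f = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(f, 1, &fs, nullptr);
    glCompileShader(f);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, v);
    glAttachShader(prog, f);
    glLinkProgram(prog);
    glUseProgram(prog);

    // Per frame: one draw command, and the GPU does the rest.
    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
}
```

The shape of the program is the point: all the uploads happen once up front, and the per-frame work on the CPU is almost nothing.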
History lesson over. Time to wake up.
Software rendering just means the CPU does the work of turning point clouds and textures into a beautiful picture, rather than relying on dedicated hardware to do the job.
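Here is that idea in miniature, in plain C++ with no graphics hardware involved: perspective-project the corners of a cube into a pixel buffer on the CPU and write the result out as a grayscale image (the file name and projection constants are just for illustration):

```cpp
// Software rendering in miniature: the CPU alone turns 3D points into a
// 2D image. Points on a cube are perspective-projected into a pixel
// buffer, which is written out as a PGM file. No GPU involved.
#include <cstdio>
#include <vector>

int main() {
    const int W = 256, H = 256;
    std::vector<unsigned char> pixels(W * H, 0);  // the framebuffer

    // Eight corners of a cube, placed in front of the camera along +z.
    const float cube[8][3] = {
        {-1,-1,3}, {1,-1,3}, {-1,1,3}, {1,1,3},
        {-1,-1,5}, {1,-1,5}, {-1,1,5}, {1,1,5},
    };

    for (const auto& p : cube) {
        // Simple pinhole projection: divide x and y by depth.
        float sx = p[0] / p[2], sy = p[1] / p[2];
        int px = (int)((sx * 0.5f + 0.5f) * (W - 1));
        int py = (int)((-sy * 0.5f + 0.5f) * (H - 1));
        pixels[py * W + px] = 255;  // plot a white dot
    }

    // Write the image; any viewer that reads PGM can display it.
    FILE* out = std::fopen("render.pgm", "wb");
    std::fprintf(out, "P5 %d %d 255\n", W, H);
    std::fwrite(pixels.data(), 1, pixels.size(), out);
    std::fclose(out);
}
```

Slower than dedicated hardware for realtime work, sure, but perfectly serviceable, and for offline rendering the extra time often doesn't matter.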