What can software rasterizers be used for today?

As mentioned, software rasterizers are still of some practical use in aiding the GPU, as DICE does -- in general, any time you can get away with a lower-resolution stand-in with limited or no "pixel shading" -- so things like occlusion culling, and perhaps shadow-map generation or other lighting effects, could be done this way.
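For illustration, the occlusion-culling use boils down to: rasterize your big occluders into a small CPU-side depth buffer, then test each object's screen-space bounds against it before submitting it to the GPU. This is just a minimal sketch with made-up names -- a production version (like DICE's) rasterizes conservatively and is heavily vectorized:

// Minimal sketch of software occlusion testing: occluders are assumed to
// already be rasterized into a small depth buffer (1.0f == far plane).
#include <vector>
#include <algorithm>

struct DepthBuffer {
    int w, h;
    std::vector<float> depth;
    DepthBuffer(int w, int h) : w(w), h(h), depth(w * h, 1.0f) {}
};

// Test a screen-space AABB (pixel coords, nearest depth) for visibility.
bool IsVisible(const DepthBuffer& db, int x0, int y0, int x1, int y1, float minZ)
{
    x0 = std::max(x0, 0);        y0 = std::max(y0, 0);
    x1 = std::min(x1, db.w - 1); y1 = std::min(y1, db.h - 1);
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if (minZ <= db.depth[y * db.w + x])
                return true;     // some pixel isn't covered: potentially visible
    return false;                // fully behind the rasterized occluders: cull it
}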

The very best software rasterizers of today are incredible pieces of technology: they scale across CPU cores and across vector instruction sets, even just-in-time-compiling their own pixel shaders to SSE/AVX. Still, once the pixel shading and texture sampling get cranked up, several CPUs can barely keep pace with even an entry-level GPU -- a CPU just doesn't have the compute throughput necessary to drive resolutions much beyond 1024x768 or so, to say nothing of the meager memory and cache bandwidth a typical PC has compared to a GPU.

Still a very interesting exercise though -- I'm debating pulling out my old single-threaded, mostly-non-vectorized rasterizer (which was still decently fast) and seeing how far I can push it with 4 cores and AVX.

throw table_exception("(╯°□°)╯︵ ┻━┻");

I believe high-end renderers such as RenderMan are implemented in software and scale across multiple computers, though I do believe they can utilize the GPU. I wouldn't think they use OpenGL or DirectX directly; I would think they use the GPU more like a GPGPU device. But for a real-time renderer, you will definitely need to use the GPU.


Yes, that's a distinction to be made for sure, but RenderMan is fundamentally a ray-tracer, and the final renderings take minutes or hours per frame on large clusters of computers. More and more of that can move onto the GPU as technology advances, but the very wide vector machines that GPUs are aren't a great match for ray-tracers, because it's very, very hard to keep all the rays moving in the same direction and in the same state. There are quick-turnaround previewing tools that can run scenes on a GPU in real time, but they don't have anywhere near the subtle lighting of the final rendering.

I think the OP is mostly talking about a real-time rasterizer.

throw table_exception("(╯°□°)╯︵ ┻━┻");


I believe Blender still renders on the CPU, or at least it did; they had 'reasons', but I forgot what they were.


Blender has a new raytracer called Cycles which can take advantage of CUDA and OpenCL, can have custom shaders, and so on.

Another thing I'm wondering: I've been thinking about experimenting with some rendering techniques, like rasterization via GPGPU, using something like CUDA. However, I have no CUDA experience and am wondering whether it would even be possible. Could there be certain advantages over just using DX/GL?


These guys did.
How does one even start to code a software renderer?
We use a very simplified software rasterizer for voxelizing triangle-meshes.

We set up an orthographic projection of the model and rasterize it into a simple A-buffer data structure. (An A-buffer is a framebuffer with a per-pixel list of fragment depths.)

After that, we can just sort those lists, move through them from front to back, and mark the ranges between fragments as either inside or outside in the voxel volume.
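In code, the per-pixel-column pass boils down to something like this (a simplified sketch with hypothetical names, not our actual code):

// Solidification step: each pixel column of the A-buffer holds fragment
// depths; after sorting, consecutive (entry, exit) pairs bound solid voxels.
#include <vector>
#include <algorithm>

void SolidifyColumn(std::vector<float>& fragmentDepths,  // one A-buffer pixel
                    std::vector<bool>& voxelColumn,      // voxels along depth axis
                    float voxelSize)
{
    std::sort(fragmentDepths.begin(), fragmentDepths.end());
    for (size_t i = 0; i + 1 < fragmentDepths.size(); i += 2) {
        int z0 = static_cast<int>(fragmentDepths[i]     / voxelSize);
        int z1 = static_cast<int>(fragmentDepths[i + 1] / voxelSize);
        for (int z = std::max(z0, 0); z <= z1 && z < (int)voxelColumn.size(); ++z)
            voxelColumn[z] = true;       // mark the range between fragments as inside
    }
}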
Haha, like I said, 'How does one even start to code a software renderer?'

Way over my head.

How does one even start to code a software renderer?


All you need is a chunk of memory to call your framebuffer, and a way to get it onto your screen. In GDI/Win32, SetDIBitsToDevice or StretchDIBits can get your framebuffer onto a window. Unfortunately, Win32 doesn't provide hooks for syncing to the vertical retrace, so you'll see tearing in the results (which is mostly not an issue).

Then you just need to write pixels into your framebuffer in a compatible format -- the usual suspects apply: RGBA8888, RGB565, etc.
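For example, a minimal present function with StretchDIBits might look like this (a sketch -- error handling omitted, and it assumes you already have a valid HWND and a 32-bit 0x00RRGGBB framebuffer):

#include <windows.h>
#include <vector>
#include <cstdint>

void Present(HWND hwnd, const std::vector<uint32_t>& pixels, int w, int h)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = w;
    bmi.bmiHeader.biHeight      = -h;       // negative height: top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;       // 0x00RRGGBB per pixel
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC dc = GetDC(hwnd);
    StretchDIBits(dc, 0, 0, w, h,           // destination rectangle
                  0, 0, w, h,               // source rectangle
                  pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
    ReleaseDC(hwnd, dc);
}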

throw table_exception("(╯°□°)╯︵ ┻━┻");


Haha, like I said, 'How does one even start to code a software renderer?'

Way over my head.


A good way to start is to try to understand how the GPU processes things. That means learning how index/vertex buffers work, how to write shaders, how world/view/projection space works and the math behind it, how the z-buffer works, how alpha blending works, how mip-mapping works, clipping, etc. Once you understand those concepts, you can start to write a rasterizer (alternatively, learning while you write one is a great way to learn it!). Tackling the entire thing all at once is completely overwhelming, but breaking it down into small parts takes care of that. For instance, you could start by writing a simple wireframe rasterizer -- that was what I started with. If you're not looking to write a super-fast parallelized one, it's actually quite easy.
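As a concrete starting point for that wireframe stage, here's the classic Bresenham line algorithm writing into a 32-bit framebuffer (a sketch -- it assumes you've already projected your vertices down to pixel coordinates):

#include <cstdint>
#include <cstdlib>

void DrawLine(uint32_t* fb, int w, int h,
              int x0, int y0, int x1, int y1, uint32_t color)
{
    int dx =  std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        if (x0 >= 0 && x0 < w && y0 >= 0 && y0 < h)
            fb[y0 * w + x0] = color;            // plot the current pixel
        if (x0 == x1 && y0 == y1) break;        // reached the endpoint
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step in x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step in y
    }
}

Once you can draw lines, a wireframe renderer is just: transform each vertex by your world/view/projection matrices, divide by w, map to pixel coordinates, and call DrawLine for each triangle edge.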

