How Are Non-GPU Graphics APIs Written?

4 comments, last by hunpro 11 years, 10 months ago

I am a C/C++ and Python programmer and have recently found an interest in computer graphics and software rendering. I have investigated a little beyond the two standard hardware-accelerated APIs (DirectX, OpenGL) and found things like Swiftshader that render on the CPU very quickly. I have also long been accustomed to the old "graphics.h" C header and its (rather limited, if I may say so myself) possibilities. How are APIs like Swiftshader and the simpler C graphics libraries written to run on the CPU? Would they be written in C or assembler? How can you control pixels at the lowest level and represent them with data types? My apologies if this question has been asked before or is redundant.

C dominates the world of linear procedural computing, which won't advance. The future lies in MASSIVE parallelism.

It depends on the OS, really. With any modern protected-mode OS you can't access the graphics hardware directly; everything has to go through whatever API your graphics drivers provide (on Windows this is basically GDI, DirectX and OpenGL). A software renderer therefore uses a simple data structure such as an array to represent the framebuffer(s), manipulates that with ordinary CPU code, and finally pushes the array (as a sprite or texture) to the graphics card using whichever API the OS provides.
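To make that last step concrete, here is a minimal sketch of pushing a CPU-side framebuffer to the screen with SDL2. SDL is just one of the possible APIs, and the window title, sizes and buffer layout here are illustrative assumptions, not something from the thread:

#include <SDL.h>
#include <cstdint>
#include <vector>

int main(int argc, char *argv[])
{
    const int width = 320, height = 240;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *window = SDL_CreateWindow("Software renderer",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, width, height, 0);
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, 0);

    // A streaming texture is the bridge between the CPU-side array and the GPU.
    SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
        SDL_TEXTUREACCESS_STREAMING, width, height);

    // The framebuffer: one 32-bit ARGB value per pixel, initially opaque black.
    std::vector<uint32_t> framebuffer(width * height, 0xFF000000);

    bool running = true;
    while (running)
    {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        // ... draw into framebuffer[] here with plain CPU code ...

        // Push the array to the graphics card and present it.
        SDL_UpdateTexture(texture, nullptr, framebuffer.data(), width * 4);
        SDL_RenderClear(renderer);
        SDL_RenderCopy(renderer, texture, nullptr, nullptr);
        SDL_RenderPresent(renderer);
    }

    SDL_Quit();
    return 0;
}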
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

At the lowest level a pixel is just a set of numbers describing a color. The most common representation stores the intensities to use for red, green and blue. A typical way to encode this in memory is one byte per channel, usually called RGB or RGB8. A common variation also stores a fourth value called alpha (typically used as a per-pixel transparency level) in a fourth byte; this is usually referred to as RGBA. It is often more convenient than plain RGB8 because each pixel then fits neatly in a 32-bit word, which allows faster access by the CPU.

There are many variations: using fewer bits per component to pack RGB values into 16 bits (for example RGB565: 5 bits red, 6 green, 5 blue), using completely different color representations instead of RGB, or using fewer channels (a grayscale picture needs only one value per pixel, representing its intensity).
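As a small illustration of the 16-bit case, here is a sketch of packing 8-bit channels down into RGB565 (the function names are mine, not from the thread):

#include <cstdint>

// Pack 8-bit R, G, B into a 16-bit RGB565 pixel: 5 bits red, 6 green, 5 blue.
uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

// Unpack back to 8-bit channels (the low bits are lost, so some precision drops).
void unpack_rgb565(uint16_t p, uint8_t &r, uint8_t &g, uint8_t &b)
{
    r = (uint8_t)(((p >> 11) & 0x1F) << 3);
    g = (uint8_t)(((p >> 5)  & 0x3F) << 2);
    b = (uint8_t)((p & 0x1F) << 3);
}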

And so, an image (a bitmap) is simply a 2D array of pixels. When you do software rendering, the actual choice of types used to represent pixels is up to you. You only need to conform to a more specific format when you send your pixels to an API or to the hardware, and even then those usually support multiple pixel formats.
Nowadays you'd hand your framebuffer to a window manager or something similar (SDL, SFML, OpenGL, etc.). Even back when software rendering was the norm, most games used assembler only for the most performance-critical bits; today you can get plenty of speed in C, or even in an interpreted language. I just started programming a software renderer last week, and I simply did this:

typedef unsigned int u32;   // u32 = unsigned 32-bit int
u32 *pixels = new u32[width * height];   // the framebuffer

Each u32 is made up of four bytes (u8, i.e. unsigned char), which I use to represent RGBA.
You can use bit shifting to pack your u8's into a u32:

u32 color = (r << 24) | (g << 16) | (b << 8) | a;

You can then set a specific pixel with

pixels[x + y * width] = color;
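For example (a made-up usage sketch, not from the original post), you can fill the whole buffer with a simple gradient by looping over every pixel:

for (int y = 0; y < height; ++y)
{
    for (int x = 0; x < width; ++x)
    {
        u32 r = (x * 255) / width;    // red ramps up left to right
        u32 g = (y * 255) / height;   // green ramps up top to bottom
        u32 b = 0;
        u32 a = 255;                  // fully opaque
        pixels[x + y * width] = (r << 24) | (g << 16) | (b << 8) | a;
    }
}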
I suggest you take a look at PixelToaster. This lib lets you create a window and display an array of ints in it, where every int represents the color of a pixel.
On top of that, the lib is cross-platform and also gives you easy access to mouse and keyboard input. Check out the included samples.
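From memory, getting a window on screen with it looks roughly like the sketch below; treat the exact class and function names (Display, Pixel, open(), update()) as assumptions and check the bundled samples for the real API:

#include "PixelToaster.h"
#include <vector>
using namespace PixelToaster;

int main()
{
    const int width  = 320;
    const int height = 240;

    Display display("Software rendering", width, height);

    // One floating-point RGBA pixel per screen position.
    std::vector<Pixel> pixels(width * height);

    while (display.open())
    {
        // ... write colors into pixels[] here ...
        display.update(pixels);   // show the array in the window
    }
    return 0;
}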

Then you can focus on the fun part: rasterization.
A DIB (device-independent bitmap) is very easy to use and doesn't need any external libraries, but it is Windows-only.
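If you go that route, a common approach (sketched here with standard Win32 calls; the window and HDC setup are omitted, and the buffer is assumed to be 32-bit, top-down) is to describe the buffer with a BITMAPINFO and blit it with StretchDIBits:

#include <windows.h>
#include <cstdint>

// Blit a top-down 32-bit buffer 'pixels' of size width*height to the device context 'hdc'.
void present(HDC hdc, const uint32_t *pixels, int width, int height)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    StretchDIBits(hdc,
                  0, 0, width, height,       // destination rectangle
                  0, 0, width, height,       // source rectangle
                  pixels, &bmi,
                  DIB_RGB_COLORS, SRCCOPY);
}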

