Crossbones+ - Reputation: 810
Posted 01 June 2012 - 10:32 AM
C dominates the world of linear procedural computing, which won't advance. The future lies in MASSIVE parallelism.
Crossbones+ - Reputation: 6111
Posted 01 June 2012 - 10:55 AM
The voices in my head may not be real, but they have some good ideas!
Members - Reputation: 237
Posted 01 June 2012 - 11:18 AM
I am a C/C++ and Python programmer and recently have found an interest in computer graphics and software rendering. I have investigated a little past the two standard hardware acceleration APIs (DirectX, OpenGL) and found things like SwiftShader that render on the CPU very quickly. I have also long been accustomed to the old "graphics.h" C header and its possibilities (rather limited, if I may say so myself). How are APIs like SwiftShader and the simpler C graphics libraries written to run on the CPU? Would they be written in C or assembler? How can you control pixels at the lowest level and represent them with data types? My apologies if this question has been asked before or is redundant.
At the lowest level a pixel is just a set of numbers describing a color. The most common representation is to store the intensities to use for red, green and blue. A common way to encode this in memory is to use one byte for each, which is usually called RGB or RGB8. A common variation is to also store a fourth value called alpha (typically used as a per-pixel transparency level) in a fourth byte; this is usually referred to as RGBA (it is more useful than RGB8 because each pixel is then neatly represented as a 32-bit word, which allows for faster access by the CPU).
There are many variations: using fewer bits per component to encode RGB values in 16 bits, using completely different color representations instead of RGB, or using fewer channels (a black-and-white picture needs only one value per pixel, representing its intensity).
And so, an image (a bitmap) is simply a 2D array of pixels. When you do software rendering, the actual choices when it comes to the types you use to represent pixels are up to you. You only need to conform to some more specific format when you need to send your pixels to an API or to the hardware, and even then those usually support multiple different pixel formats.
Members - Reputation: 643
Posted 01 June 2012 - 11:39 AM
u32 *pixels=new u32[width*height];
(u32=unsigned 32bit int)
Each u32 is made up of four bytes (u8 or unsigned char), which I use to represent RGBA.
You can use bit shifting to pack your u8's into a u32, and shift and mask to read a specific color channel back out.
Crossbones+ - Reputation: 2572
Posted 02 June 2012 - 11:16 AM
On top of that, the lib is cross-platform and also gives you easy access to mouse and keyboard input. Check out the included samples.
Then you can focus on the fun part: rasterization.