How can pixel coordinates and RGB values be represented with an 8-bit binary representation

3 comments, last by Waterlimon 10 years, 4 months ago

I know this is a huge topic (the meat and bones of all h/w and software, actually), but I want to try and get some kind of visual idea.

So, my understanding is: you have a monitor with rows and rows of individual pixels; each pixel has an (x, y) coordinate and an RGBA value. You have a front and back buffer that hold the data, and then a constant frame-after-frame updating of the screen, which finally gives you the impression of a stable, animated GUI. Really weird!

So I would have thought that after all the coding/compiling/linking/pixel and vertex shading/switching on and off of transistors, and the back and front buffers sending data repeatedly to the monitor's electronics, what we end up with is about 8 values, where each of the R, G and B values has a corresponding intensity value, so something like RABAGAXY, where X and Y are the coords, etc.

So how do you actually tell the monitor the colour of certain pixels using only a 0-or-1 binary system? For example, let's say you have a monitor with 100 rows and 100 columns of pixels, so that would be 100*100 = 10,000 pixels on the screen, and each pixel has its own (x, y) coordinate. Now I can understand that memory stores an 8-bit binary representation in 0s and 1s (i.e. voltage on or off across a transistor). But I still can't see the point in the whole process where the pixel coordinate and RGBA values are set.


Slight diversion here: if I had a paper grid of 100*100 pixels, a brush and black paint, how would you tell me via the 8-bit binary system which pixel coordinates to paint with my brush? I could understand if there was, say, a database stored in the GPU where it could look up predefined values, but I can't see how you go from binary to RGB, alpha and coordinate values.

As I said, this is the whole topic of h/w and software; I just can't see how an 8-bit binary number on its own can specify coordinates, colours and colour intensities, etc.


Pixels are small pixies with paint brushes, which read the values in the memory and paint them using pixie magic.

More likely: it uses a digital-to-analogue converter to change binary values to voltages, which are then sent to the LCD's R, G and B channels (or the raster beam in a CRT monitor); each voltage corresponds to a brightness value for that channel. Alpha doesn't exist on the actual display; it's just used by the GPU to blend colour values.
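To put a rough number on that (purely as an illustration; the 0.7 V full-scale value is just an assumption, not any particular monitor's spec), the conversion is basically "treat the byte as a fraction of full scale":

// Rough idea only: the 8-bit value becomes a fraction of the channel's full drive level.
const float fullScaleVolts = 0.7f;          // assumed full-scale output of a hypothetical DAC
unsigned char red = 200;                    // 8-bit channel value, 0..255

float outputVolts = (red / 255.0f) * fullScaleVolts;  // roughly 0.55 V for this example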

"Most people think, great God will come from the sky, take away everything, and make everybody feel high" - Bob Marley

The pixel coordinates are not stored anywhere. They're implicit from the location inside the memory buffer where the rgba value was read. So, for 3 bytes per pixel (1 byte each for r, g and b) and a screen that is 100x100 pixels, byte offset 312 would correspond to pixel (4, 1): y = offset / rowPitch, x = (offset % rowPitch) / bytesPerPixel. Your row pitch is 300 bytes (3 bytes per pixel, 100 pixels per row), so 312 / 300 = 1 and (312 % 300) / 3 = 4.
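In code, a quick sketch of going back and forth between a byte offset and an (x, y) coordinate, using the same assumptions as above (3 bytes per pixel, 100 pixels per row), might look like this:

const int bytesPerPixel = 3;
const int widthInPixels = 100;
const int rowPitch = bytesPerPixel * widthInPixels;   // 300 bytes per row

// (x, y) -> byte offset of that pixel's data in the buffer
int offsetOf(int x, int y) {
    return y * rowPitch + x * bytesPerPixel;
}

// byte offset -> (x, y), the inverse of the above
void coordsOf(int offset, int &x, int &y) {
    y = offset / rowPitch;
    x = (offset % rowPitch) / bytesPerPixel;
}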

Not sure I understand the question perfectly, but think of a single RGBA pixel. Nowadays, computers (and programmers) love working with bytes, a byte being a group of 8 bits, because let's face it, you can't do much with only a single 0 or 1. So each colour channel, red, green and blue, takes 1 byte, or 8 bits, which means it can have any value between 0 and 255. But computer languages also define integers, which are typically 4 bytes long, or 32 bits, giving a possible value between 0 and 4294967295 (or -2147483648 to 2147483647, depending on whether the value is signed or not). That way we can use values bigger than 255, like the x and y coordinates you were talking about.
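As a small side note (my own example, not something from the post above), this is also why you often see a whole RGBA pixel packed into one 32-bit integer: four 8-bit channels fit exactly, and shifts and masks get them back out. The 0xAARRGGBB layout below is just one common convention:

#include <cstdint>

// pack four 8-bit channels into a single 32-bit value (AARRGGBB layout assumed)
uint32_t packRGBA(uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    return (uint32_t(a) << 24) | (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
}

// pull the red channel back out of the packed value
uint8_t redOf(uint32_t pixel) {
    return (pixel >> 16) & 0xFF;
}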

Now, let's say your screen size is 1360x768 with 32-bit colours; that means it takes 1360x768x4 bytes to store that information, which is 4177920 bytes. All that information is generally stored in a linear array, one byte after the other. Let's say you want to allocate a buffer for a 24-bit image that is 1000x1000 pixels; in C++, you would do something like this:


#include <cstdlib>   // for rand()

typedef unsigned char BYTE;  // one byte: values 0 to 255

// create a structure to hold 1 pixel's data
struct RGB {
    BYTE r, g, b;
};

// some useful constants
int Width = 1000;
int Height = 1000;
int NumPixels = Width * Height;

// Allocate a buffer large enough to store the pixels
RGB *pBuffer = new RGB[NumPixels];

// Fill them with random values
for(int i = 0; i < NumPixels; i++){
    pBuffer[i].r = rand() % 256;     // this assigns a value between 0 and 255 to each byte in the image
    pBuffer[i].g = rand() % 256;     // that's 16777216 possible colours per pixel (256*256*256)
    pBuffer[i].b = rand() % 256;
}

// now, let's say we want to access the pixel (x=100, y=50) and set it completely red...
int x = 100;
int y = 50;
int Indx = (y * Width) + x; // basic formula for transforming a 2d coordinate into a 1d (linear) index

pBuffer[Indx].r = 255;
pBuffer[Indx].g = 0;
pBuffer[Indx].b = 0;

// free the buffer when you're done with it
delete[] pBuffer;



Hope that helps!

EDIT: other good reads: link, link, link

First, the pixel itself contains 3 bytes (let's ignore alpha), each with 8 bits.

Each bit is like a digit in our base-10 number system; addition and everything else work the same (e.g. if you increment a digit by 1 and it goes over the "limit", you set it to 0 and increment the next digit).

e.g.

binary:  001 + 001 = 010   (the last digit overflows past 1, so it resets to 0 and carries 1)
decimal: 001 + 009 = 010   (the last digit overflows past 9; same carry behaviour, just in base 10)

So basically you have a structure like (0-255,0-255,0-255), each value telling the amount of that color. Using combinations of these colors the monitor can create any color for the single pixel.

Now, next the position.

On the GPU, we have, as a result of all the processing, a huge array of the above-mentioned RGB colors, next to each other in memory. The size of the array is width*height of the screen, so there is one entry for every pixel.

You could imagine the drawing process as going through these values one by one and writing each to the corresponding pixel on the screen.

For example, you could imagine that the screen also contains a width*height array of pixels, each of which is hard-wired to a physical pixel on the screen. Then it would just be a matter of copying the pixel array from the GPU to the screen's memory. Probably not how it actually works, but it could be one way.
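Sticking with that imaginary picture, the "send it to the screen" step would be nothing more than a block copy of width*height pixels from the GPU's buffer into the screen's buffer, something like:

#include <cstring>   // for memcpy

struct RGB { unsigned char r, g, b; };   // same 3-byte pixel as before

// copy the finished frame into the (imaginary) screen memory, one row at a time
void presentFrame(const RGB *gpuBuffer, RGB *screenMemory, int width, int height) {
    for (int y = 0; y < height; ++y)
        memcpy(screenMemory + y * width, gpuBuffer + y * width, width * sizeof(RGB));
}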

If you want to know how, on the transistor level, it is possible to index some byte in memory given another byte representing its position, one way it could be implemented is as a hierarchy.

If you have a tree where each node splits into 2 subnodes, and it has 8 levels, this gives you 256 units of memory (the bottommost nodes). To access one of these using a byte (8 bits), you would take the first bit to decide which subnode to go to from the root node, then use the second bit to decide where to turn next, etc., until you reach the bottom level.
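A toy version of that lookup (just to show the principle; real memory uses address-decoder circuitry, not pointer chasing) would walk 8 levels, one address bit per level, and land on one of 256 cells:

unsigned char memory[256];   // the 256 "bottommost nodes"

// select one of the 256 cells by narrowing the range by half per address bit
unsigned char readCell(unsigned char address) {
    int lo = 0, hi = 256;                  // current subtree covers cells [lo, hi)
    for (int level = 7; level >= 0; --level) {
        int mid = (lo + hi) / 2;
        if (address & (1 << level))        // bit set  -> take the right subnode
            lo = mid;
        else                               // bit clear -> take the left subnode
            hi = mid;
    }
    return memory[lo];                     // after 8 steps, lo == address
}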

o3o

