I know this is a huge topic, the meat and bones of all hardware and software really, but I want to try to get some kind of visual idea.
So my understanding is: you have a monitor with rows and rows of individual pixels, where each pixel has an (x, y) coordinate and an RGBA value. You have a front and a back buffer that hold the pixel data. And then you have a constant, frame-after-frame updating of the screen, which finally gives the impression of a steady, animated GUI. Really weird!
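Here's a tiny Python sketch of that mental model (a toy, all the names are mine, not any real API): two grids of RGBA values, where you draw into the back one and then swap.

```python
# Toy model of the picture above: a framebuffer is just a grid of
# RGBA tuples, and "flipping" swaps which grid the monitor shows.
WIDTH, HEIGHT = 4, 3  # a tiny 4x3 "screen" for illustration

def make_buffer(width, height):
    # every pixel starts black and fully opaque: (R, G, B, A)
    return [[(0, 0, 0, 255) for _ in range(width)] for _ in range(height)]

front = make_buffer(WIDTH, HEIGHT)  # what the monitor is scanning out
back = make_buffer(WIDTH, HEIGHT)   # what the program draws into

back[1][2] = (255, 0, 0, 255)       # draw a red pixel at x=2, y=1
front, back = back, front           # "flip": next frame shows the red pixel
```

Is that roughly the idea, just done in hardware with real memory instead of Python lists?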
So I would have thought that after all the coding/compiling/linking/pixel and vertex shading/switching of transistors on and off, with the back and front buffers repeatedly sending data to the monitor's electronics, what we have to end up with is a handful of values per pixel: an intensity value for each of R, G and B, something like RvGvBvXY, where v is each channel's intensity and XY are the coords etc.
So how do you actually tell the monitor the colour of certain pixels using only a 0-or-1 binary system? For example, let's say you have a monitor with 100 rows and 100 columns of pixels, so that would be 100*100 = 10,000 pixels on the screen, and each pixel has its own (x, y) coordinate. Now I can understand that memory stores 8-bit binary representations as 0s and 1s (i.e. voltage on or off across a transistor). But I still can't see the point in the whole process where the pixel coordinate and RGBA values are set.
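If I sketch my best guess in Python (again a toy, the names are mine): maybe the coordinate is never actually stored anywhere, and a pixel's position in memory *is* its (x, y)? The bytes would be scanned out in a fixed row order, so the byte offset implies the coordinate.

```python
# Guess: a "linear framebuffer" where the byte offset encodes (x, y),
# so only the four 8-bit colour values per pixel are ever stored.
WIDTH, HEIGHT = 100, 100
BYTES_PER_PIXEL = 4  # R, G, B, A: one 8-bit value each

framebuffer = bytearray(WIDTH * HEIGHT * BYTES_PER_PIXEL)  # all zeros = black

def set_pixel(x, y, r, g, b, a=255):
    offset = (y * WIDTH + x) * BYTES_PER_PIXEL  # coordinate -> byte address
    framebuffer[offset:offset + 4] = bytes((r, g, b, a))

set_pixel(10, 20, 255, 0, 0)  # a red pixel at (10, 20)
# the R byte at offset (20*100 + 10)*4 is now 255
```

Is that how the coordinate "gets set": not as data at all, but as an address?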
Slight diversion here: if I had a paper grid of 100*100 cells, a brush and black paint, how would you tell me via an 8-bit binary system which pixel coordinates to paint with my brush? I could understand if there were, say, a database stored on the GPU where it could look up predefined values, but I can't see how you go from binary to RGB, alpha and coordinate values.
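Trying to answer my own paper-grid question the same way (a sketch, my own made-up scheme): if we agree in advance on a scan order (left to right, top to bottom), you could hand me one bit per cell and never send a coordinate at all, because bit k would belong to cell (k % width, k // width).

```python
# One bit per cell, read in an agreed scan order: 1 = paint, 0 = skip.
# The bit's position in the stream implies which cell it means.
width, height = 4, 2
bits = "01000010"  # 8 bits, one per cell of the 4x2 grid

painted = []
for k, bit in enumerate(bits):
    if bit == "1":
        x, y = k % width, k // width  # recover the coordinate from position
        painted.append((x, y))
# painted is [(1, 0), (2, 1)]
```

So the "lookup" isn't a database, just a convention both sides agree on. Is that right?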
As I said, this is the whole topic of hardware and software; I just can't see how an 8-bit binary number on its own can specify coordinates, colours and colour intensities etc.