How Are Non-GPU Graphics APIs Written?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

5 replies to this topic

#1 Toothpix   Crossbones+   -  Reputation: 810


Posted 01 June 2012 - 10:32 AM

I am a C/C++ and Python programmer and have recently found an interest in computer graphics and software rendering. I have looked a little past the two standard hardware-accelerated APIs (DirectX, OpenGL) and found things like SwiftShader that render on the CPU very quickly. I have also long been accustomed to the old "graphics.h" C header and its (rather limited, if I may say so myself) possibilities. How are APIs like SwiftShader and the simpler C graphics libraries written to run on the CPU? Would they be written in C or assembler? How can you control pixels at the lowest level, and what data types represent them? My apologies if this question has been asked before or is redundant.

C dominates the world of linear procedural computing, which won't advance. The future lies in MASSIVE parallelism.



#2 SimonForsman   Crossbones+   -  Reputation: 6294


Posted 01 June 2012 - 10:55 AM

It depends on the OS, really. On any modern protected-mode OS you can't access the hardware directly; everything has to go through whatever API your graphics drivers provide (on Windows this is basically GDI, DirectX, and OpenGL). A software renderer therefore uses a simple data structure, such as an array, to represent the framebuffer(s), manipulates that, and finally pushes the array (as a sprite or texture) to the graphics card using whichever API the OS provides.
I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

#3 Zlodo   Members   -  Reputation: 246


Posted 01 June 2012 - 11:18 AM

How are APIs like SwiftShader and the simpler C graphics libraries written to run on the CPU? Would they be written in C or assembler? How can you control pixels at the lowest level and represent them with data types?


At the lowest level a pixel is just a set of numbers describing a color. The most common representation stores the intensities to use for red, green, and blue. A typical way to encode this in memory is one byte per channel, which is usually called RGB or RGB8. A common variation also stores a fourth value called alpha (typically used as a per-pixel transparency level) in a fourth byte; this is usually referred to as RGBA. It is more convenient than RGB8 because each pixel is then neatly represented as a 32-bit word, which allows for faster access by the CPU.

There are many variations: using fewer bits per component to encode RGB values in 16 bits, using completely different color representations instead of RGB, or using fewer channels (a black-and-white picture needs only one value per pixel, representing its intensity).

And so an image (a bitmap) is simply a 2D array of pixels. When you do software rendering, the choice of types used to represent pixels is up to you. You only need to conform to a more specific format when you send your pixels to an API or to the hardware, and even then those things usually support multiple different pixel formats.

#4 zacaj   Members   -  Reputation: 643


Posted 01 June 2012 - 11:39 AM

Nowadays you'd have to pass your framebuffer on to a window manager or something similar (SDL, SFML, GL, etc.). Even back when software rendering was the norm, most games only used assembler for the most performance-critical bits; now you can get plenty of speed in C, or even in an interpreted language. I just started programming a software renderer last week, and I simply did this:

u32 *pixels = new u32[width * height];
(u32 = unsigned 32-bit int.) Each u32 is made up of four bytes (u8, or unsigned char), which I use to represent RGBA.
You can use bit shifting to pack your u8s into a u32 (the casts keep the shifts in unsigned arithmetic):
u32 color = ((u32)r << 24) | ((u32)g << 16) | ((u32)b << 8) | a;
and you can write a specific pixel by doing
pixels[x + y * width] = color;

#5 Krypt0n   Crossbones+   -  Reputation: 2656


Posted 02 June 2012 - 11:16 AM

I suggest you take a look at PixelToaster. This lib lets you create a window and display an array of ints in it, where every int represents the color of a pixel.
On top of that, the lib is cross-platform and also gives you easy access to mouse and keyboard input. Check out the included samples.

Then you can focus on the fun part: rasterization.

#6 hunpro   Members   -  Reputation: 900


Posted 03 June 2012 - 05:50 PM

A DIB (device-independent bitmap) is very easy to use and doesn't need any external libraries, but it's Windows-only.



