xissburg

How to do 'from scratch GPU programming'?


I would like to know how I can send commands to the GPU through the CPU (assembly programming, probably). It's almost the same thing as asking how Direct3D/OpenGL are implemented and how they are programmed. Any articles about it? It's just curiosity; I want to learn more about the very low-level side of things. Thanks.

Quote:
Original post by xissburg
I would like to know how I can send commands to the GPU through the CPU (assembly programming, probably). It's almost the same thing as asking how Direct3D/OpenGL are implemented and how they are programmed. Any articles about it?

It's just curiosity; I want to learn more about the very low-level side of things. Thanks.


It isn't possible to get direct access to your graphics card.

Even if it were possible, you wouldn't want to work at that level. Programming at that level mostly involves direct register writes (which are completely different for each GPU). If you mess up, the GPU hangs and you have to restart your computer.

OpenGL and D3D are implemented by your video card's driver, which is written by kernel programmers.

If you are curious about this aspect of programming, familiarize yourself with the principles of operating systems first.
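To give a rough idea of what "direct register writes" means, here is a minimal sketch. Every offset, value, and name in it is invented; each real GPU has its own (mostly undocumented) register map.

```c
#include <stdint.h>

/* Hypothetical MMIO register access. A kernel driver would obtain
 * this pointer by mapping a PCI BAR, e.g. with ioremap(). All offsets
 * and values below are made up for illustration. */
static volatile uint32_t *gpu_regs;

#define HYPOTHETICAL_REG_FIFO_PUT (0x0040 / 4) /* made-up offset */
#define HYPOTHETICAL_REG_STATUS   (0x0044 / 4) /* made-up offset */

static void kick_gpu(uint32_t put_offset)
{
    /* Writing a register is just a volatile store to mapped memory. */
    gpu_regs[HYPOTHETICAL_REG_FIFO_PUT] = put_offset;

    /* If you wrote a bad value, this (hypothetical) busy bit may never
     * clear: that is the "GPU hangs, reboot your computer" failure mode. */
    while (gpu_regs[HYPOTHETICAL_REG_STATUS] & 0x1)
        ; /* spin until the busy bit clears */
}
```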

Quote:
Original post by xissburg
I would like to know how I can send commands to the GPU through the CPU (assembly programming, probably). It's almost the same thing as asking how Direct3D/OpenGL are implemented and how they are programmed. Any articles about it?

Certainly not. This kind of information is covered by trade secrets, and usually will not be released unless you pay very large amounts of money. AMD/ATI releases some 'dumbed down' register information (i.e., none of the cutting-edge features) for open source driver development. If you really want to do this (which I definitely do not recommend), you could start by looking at the open source Linux ATI drivers.

This is a bad idea for sure, especially if you know nothing about the way a kernel works. *Start sarcasm* Pick up a book on kernel programming; the Minix book comes to mind. You can get it for about $100, and it will teach you the basics. Then get yourself a Linux distro and customize and play around with the kernel. While you are at it, grab an ATI video card, because NVIDIA does not open source its drivers. Then start poking around with the FGLRX drivers. When you fry your video card, go buy a new one and try again. Eventually you should get the idea. *End sarcasm*

No, really, it is a bad idea and not worth the trouble. If you want to get really close to the machine, use assembly and link to DX or GL from there, but don't mess with the GPU directly.

Quote:
Original post by blewisjr
When you fry your video card, go buy a new one and try again. Eventually you should get the idea.


You can't fry the video card from the kernel. It is possible to do from the firmware, however.

Quote:
Original post by fpsgamer
You can't fry the video card from the kernel.

Yes you can. You can increase the GPU clock rate beyond its thermal tolerance level. NVIDIA even lets you do this from the driver panel, if you agree to a lengthy legal agreement that voids your warranty.

Quote:
Original post by blewisjr
While you are at it, grab an ATI video card, because NVIDIA does not open source its drivers. Then start poking around with the FGLRX drivers. When you fry your video card, go buy a new one and try again. Eventually you should get the idea. *End sarcasm*


As I said above: the driver. And yes, you can fry it from the kernel and from the driver.

Quote:
Original post by xissburg
Okay, but how is Direct3D/OpenGL programmed, then? How do they do that?


Linux kernel video driver sources. Not the real thing, but about as close as you're likely to get.

As it happens, it's incredibly uninteresting stuff, consisting mostly of magic numbers and debug output.

Then there's basically an API on top of that.
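To illustrate the "magic numbers" point, here is an invented fragment in the style of that kind of driver code; it is not taken from any real driver, and none of the offsets or values mean anything.

```c
#include <stdint.h>
#include <stdio.h>

/* Invented fragment in the style of low-level driver init code: long
 * runs of write-magic-value-to-magic-offset, plus debug output. */
static void hypothetical_chip_init(volatile uint32_t *regs)
{
    regs[0x1540 / 4] = 0x00010000; /* only the vendor knows what */
    regs[0x1544 / 4] = 0x0000ffff; /* these values actually do   */
    regs[0x15a0 / 4] = 0x8ef349c0;

    if ((regs[0x0000 / 4] & 0x0ff00000) != 0x04000000)
        printf("debug: unexpected chipset revision\n");
}
```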

I believe they talk directly to the driver. As for how, I could not say; that is where the trade secrets come in. My best guess is that the programmers of DX and OGL are kernel-level programmers who know exactly how to tell the OS to send such-and-such instructions to the driver, and the driver passes it all on to the GPU. You must remember that Microsoft and the makers of OGL work in tandem with NVIDIA and ATI.

Quote:
Original post by blewisjr
I believe they talk directly to the driver.

They are the driver.

Quote:

You must remember that Microsoft and the makers of OGL work in tandem with NVIDIA and ATI.

The 'makers of OpenGL' actually work at NVidia or ATI (well, not so much ATI [wink]). Each chipset vendor writes their own drivers, and that includes the OpenGL/D3D parts of it. So obviously they know all the secret GPU protocols - because they own them. From the point of view of a GPU manufacturer, both OpenGL and D3D are just a specification for an interface. They have to implement this interface for their respective chipset, which is done as part of the driver.

You, the application developer, then talk to this interface from the "other side" if you want (well, it's actually routed through an additional thin layer, but conceptually you directly talk to the driver when using OpenGL or D3D).
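A toy illustration of that "interface the vendor implements" idea; all names here are invented, but the real OpenGL ICD dispatch mechanism works on the same principle, with hundreds of entry points instead of two.

```c
/* Toy model of "the API is just an interface the vendor implements". */
struct gl_dispatch {
    void (*Clear)(unsigned mask);
    void (*DrawArrays)(int mode, int first, int count);
};

/* The vendor's driver fills the table with its own functions, which
 * ultimately emit hardware-specific commands. */
static void nv_clear(unsigned mask)             { /* chip-specific */ }
static void nv_draw(int mode, int first, int n) { /* chip-specific */ }

static const struct gl_dispatch vendor_driver = { nv_clear, nv_draw };

/* The application side: the API entry point just forwards into
 * whichever driver's table was loaded at runtime. */
static const struct gl_dispatch *current = &vendor_driver;
void apiClear(unsigned mask) { current->Clear(mask); }
```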

You could do what I did a few months ago...

Pick up a few microcontrollers or microprocessors, preferably 8-bit. Build the additional hardware for one to run on, and learn to program it in assembly. Then develop the crazy idea of building a home-built game console with it. Connect it to an RCA jack and write a software "GPU" for the MCU/MPU that rasterizes buffered graphics. Then purchase an FPGA/CPLD and write a simple 3D rasterizer in VHDL/Verilog. If you haven't fried your TV by now, you have (basically) created a graphics card and learned to program it at the lowest level.

After all that, direct register access isn't scary, and an API is a luxury. Seriously, it's a great way to learn to program at the lowest level, such as directly interfacing with a GPU.
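The software-rasterizer part boils down to filling a framebuffer by hand. A minimal sketch of the idea (one byte per pixel, arbitrary sizes, no actual video output):

```c
#include <stdint.h>
#include <string.h>

/* Minimal "software GPU": a framebuffer filled by hand, the way an
 * MCU-based rasterizer would before scanning it out to the TV. */
#define W 128
#define H 96
static uint8_t framebuffer[H][W]; /* 1 byte per pixel */

/* Fill one horizontal span: the core of scanline rasterization. */
static void draw_span(int y, int x0, int x1, uint8_t color)
{
    if (y < 0 || y >= H) return;
    if (x0 < 0) x0 = 0;
    if (x1 >= W) x1 = W - 1;
    for (int x = x0; x <= x1; ++x)
        framebuffer[y][x] = color;
}

int main(void)
{
    memset(framebuffer, 0, sizeof framebuffer);
    /* "Rasterize" a solid rectangle, one scanline at a time. */
    for (int y = 20; y < 60; ++y)
        draw_span(y, 30, 90, 0xff);
    return 0; /* real hardware would now scan this buffer out */
}
```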

Quote:
Original post by Craig_jb
You could do what I did a few months ago...

Pick up a few microcontrollers or microprocessors, preferably 8-bit. Build the additional hardware for one to run on, and learn to program it in assembly. Then develop the crazy idea of building a home-built game console with it. Connect it to an RCA jack and write a software "GPU" for the MCU/MPU that rasterizes buffered graphics. Then purchase an FPGA/CPLD and write a simple 3D rasterizer in VHDL/Verilog. If you haven't fried your TV by now, you have (basically) created a graphics card and learned to program it at the lowest level.

After all that, direct register access isn't scary, and an API is a luxury. Seriously, it's a great way to learn to program at the lowest level, such as directly interfacing with a GPU.



Rate up for being a total power geek :D

Quote:
Original post by blewisjr
Thanks for clearing that up, Yann L. So if I get this right, the DX and OGL APIs would then be interfaces to the driver.

With OpenGL, you're talking directly to the driver, yes. For D3D, you're routed through an additional layer, which is rather thin on XP and significantly heavier on Vista (but this doesn't necessarily have performance implications; it's just that MS does a lot more of the work on 'their' part of the driver on Vista than on XP).

Quote:
Original post by Craig_jb
You could do like I did a few months ago...

You must have a lot of time on your hands... ;)

Well, from what I've understood about "low-level" graphics programming, the lowest and safest level you can work with is the shader languages: HLSL, GLSL, and Cg. Those are "portable" assembly languages for the video cards (or at least that's how I heard them described a year or two ago).
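For example, a trivial GLSL fragment shader handed to the driver through the standard API. A sketch assuming an OpenGL 2.0+ context is already set up, with error checking omitted:

```c
#define GL_GLEXT_PROTOTYPES /* expose GL 2.0 prototypes on Linux */
#include <GL/gl.h>

/* The driver's compiler turns this portable source into whatever the
 * GPU's native instruction set is: the "portable assembly" idea. */
static const char *frag_src =
    "void main() {\n"
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n" /* solid red */
    "}\n";

GLuint build_fragment_shader(void)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &frag_src, NULL);
    glCompileShader(shader); /* driver compiles to native GPU code */
    return shader;
}
```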

If you really want to, and you have an NVIDIA card, install Linux and the Nouveau driver, and you can use DRM to map and write directly to the GPU command buffer. Here is an example of such a program:

http://nouveau.cvs.sourceforge.net/nouveau/nv40_demo/main.c?revision=1.8&view=markup

I suggest you don't waste your time writing code at such a low level, unless you're interested in writing drivers.
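In spirit, what that demo does is append 32-bit words (a method header followed by data words) to a DRM-mapped push buffer that the GPU reads. A sketch of the idea; the method number below is invented, and the real ones are in the Nouveau sources:

```c
#include <stdint.h>

/* Sketch of command ("push") buffer submission, in the spirit of the
 * nv40_demo linked above. */
static uint32_t *pushbuf; /* DRM-mapped command buffer      */
static unsigned  pb_put;  /* current write position (words) */

/* NVIDIA FIFO commands are a header word encoding the method and the
 * word count, followed by the data words. */
static void fifo_begin(unsigned subchannel, unsigned method, unsigned count)
{
    pushbuf[pb_put++] = (count << 18) | (subchannel << 13) | method;
}

static void fifo_data(uint32_t word)
{
    pushbuf[pb_put++] = word;
}

static void example(void)
{
    /* Write one word to a hypothetical clear-color method; the GPU
     * fetches it once the PUT register is advanced past these words. */
    fifo_begin(0, 0x0364 /* invented method number */, 1);
    fifo_data(0xff0000ff);
}
```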

Quote:
Original post by Yann L
Yes you can. You can increase the GPU clock rate beyond its thermal tolerance level. NVIDIA even lets you do this from the driver panel, if you agree to a lengthy legal agreement that voids your warranty.

XFX Warranty
Simple: don't use NVIDIA's overclocking tools [grin]

