Accessing video memory in C++

Started by
33 comments, last by phresnel 14 years, 8 months ago
I like to do char *myScreen = new char[height*width*3], and draw to that. Then you can copy that to your window however you like; it can easily be implemented separately from the rest of your game in any of the mentioned APIs. The easiest is probably StretchDIBits in Win32, which puts those pixels in the window with a single function call.
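For illustration, here is a minimal sketch of that Win32 path (the PresentBuffer helper and its parameter names are just illustrative; it assumes a 24-bit BGR buffer whose rows happen to be DWORD-aligned, since GDI pads scanlines to 4-byte multiples):

    #include <windows.h>

    // Sketch: blit a width*height*3 BGR buffer into a window with StretchDIBits.
    // Assumes the window handle and pixel buffer are created elsewhere.
    void PresentBuffer(HWND hwnd, const void* pixels, int width, int height)
    {
        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bmi.bmiHeader.biWidth       = width;
        bmi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 24;        // 3 bytes per pixel
        bmi.bmiHeader.biCompression = BI_RGB;

        HDC hdc = GetDC(hwnd);
        StretchDIBits(hdc,
                      0, 0, width, height,       // destination rectangle
                      0, 0, width, height,       // source rectangle
                      pixels, &bmi, DIB_RGB_COLORS, SRCCOPY);
        ReleaseDC(hwnd, hdc);
    }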
I hate to sound asinine, but maybe you would have better luck accessing video memory directly if you used a hacked-up version of the Linux kernel. I think the linear frame buffer is still there on x86 machines at 0x000A0000. It's also easier to write a device driver under Linux than Windows.

Short of that, the only way you're going to access video memory directly is if you write your own operating system and/or your own video driver. As one of the posters above mentioned, if you *DID* write your own video driver, its API would be your replacement for D3D (except that it would only work with one specific type of hardware, whereas D3D is generic enough to interface with 99.9% of existing video cards).

Or you could just roll your own mini-OS that has no purpose except to access video memory. Another alternative would be to create a Windows program compiled for the Boot Environment Application subsystem, which gives pretty raw access to the video buffer (through the Bg/Bl APIs) but is largely undocumented. You could write your whole program there: the machine would go through the BIOS and POST, then your boot program, and as long as you keep it from exiting back to bootmgfw, your program would be the one and only!
Quote:Original post by Steve_Segreto
I hate to sound asinine, but maybe you would have better luck accessing video memory directly if you used a hacked-up version of the Linux kernel. [...] Or you could just roll your own mini-OS that has no purpose except to access video memory. [...]


That sounds a bit complicated, and I don't really want to make a new OS. PixelToaster is fine for my needs, and it's pretty fast (is it? I'm just drawing 3D lines right now :) )

And while on the topic of PixelToaster: it is built on Direct3D, right? And Direct3D is hardware accelerated on most Windows machines, right? So, if that's true, then anything I create with PixelToaster is hardware accelerated, right?
Quote:Original post by Phynix
And while on the topic of PixelToaster: it is built on Direct3D, right? And Direct3D is hardware accelerated on most Windows machines, right? So, if that's true, then anything I create with PixelToaster is hardware accelerated, right?
The presence or absence of hardware acceleration in PixelToaster is really pretty meaningless when you are performing software rasterisation/ray-tracing [wink]

That said, PixelToaster does use D3D to present your surface under Windows, but even if it didn't, I highly doubt the blit operation would overshadow the cost of your software rendering.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Quote:Original post by Phynix
And while on the topic of PixelToaster: it is built on Direct3D, right? And Direct3D is hardware accelerated on most Windows machines, right? So, if that's true, then anything I create with PixelToaster is hardware accelerated, right?


As pointed out, hardware acceleration these days isn't about setting single pixels or blitting; it's about transforming geometry, rasterizing, texture lookups, shader execution, and parallelizing the heck out of it. What's accelerated is pretty much everything you insist on doing yourself, so basically you're walking on your hands saying "but I'm wearing running shoes, so I _should_ be pretty fast".

As a hobby project this can be fun and teach you a lot, but don't expect your hand-rasterized triangle to be even remotely as fast as one drawn with D3D or OpenGL.
f@dz - http://festini.device-zero.de
Quote:Original post by Ravyne
One downside to PixelToaster is that it only supports 32bit ARGB and 128bit (4xfloat) ARGB color formats. Granted 32bit is pretty standard, even for software renderers, but going to 16bit color can essentially double your fill-rate for free.

Taking into account the conversion from the internal format (which will most likely be some kind of RGBA) to a 16bit packed format, it's probably not going to make enough of a difference to justify the additional work and decreased image quality.
Well, if you were rendering into a 16bit back-buffer, then presumably you'd be working with 16bit source images, so the only conversion would be when drawing windowed on a desktop with a different bit depth, and that's only a one-time conversion per frame. If you're rendering fullscreen, then there's no conversion to speak of.

For straight blits with transparency, and nearest-pixel scaling and rotation, 16bit shouldn't impose any undue overhead in terms of pixel processing. Now, when you start handling alpha-blending or filtered sampling for rotation and scaling, 16bit starts to get you in trouble because you have to unpack the color components, operate on them, and then repack them. Even that's not the end of the world, though -- converting from a 5- or 6-bit color component to an 8-bit component is only a bitwise AND, two shifts, and a bitwise OR. A while back I wrote a template meta-program that generates optimal code for converting between such formats from simple format descriptions.
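As a rough illustration of that unpack/repack cost (just a sketch with made-up helper names, not the template meta-program mentioned above), converting an RGB565 pixel to and from 8-bit components comes down to shifts and masks:

    #include <cstdint>

    // Unpack a 16-bit RGB565 pixel into 8-bit components. Replicating the high
    // bits into the low bits maps the 5/6-bit range onto the full 0..255 range.
    inline void UnpackRGB565(uint16_t p, uint8_t& r, uint8_t& g, uint8_t& b)
    {
        uint8_t r5 = (p >> 11) & 0x1F;
        uint8_t g6 = (p >> 5)  & 0x3F;
        uint8_t b5 =  p        & 0x1F;
        r = (r5 << 3) | (r5 >> 2);   // 5 bits -> 8 bits
        g = (g6 << 2) | (g6 >> 4);   // 6 bits -> 8 bits
        b = (b5 << 3) | (b5 >> 2);
    }

    // Repack 8-bit components into RGB565 by dropping the low bits.
    inline uint16_t PackRGB565(uint8_t r, uint8_t g, uint8_t b)
    {
        return static_cast<uint16_t>(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }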

If most of your rendering doesn't need this kind of filtering or alpha-blending then 16bit is probably still a win since you can process and write twice as many pixels per iteration, particularly with SIMD instructions.

throw table_exception("(╯°□°)╯︵ ┻━┻");

Quote:Original post by Ravyne
Well, if you were rendering into a 16bit back-buffer, then presumably you'd be working with 16bit source images, so the only conversion would be when drawing windowed on a desktop with a different bit depth, and that's only a one-time conversion per frame. If you're rendering fullscreen, then there's no conversion to speak of.
Do GPUs offer a 16-bit backbuffer these days? Neither of my two graphics cards seems to support a 16-bit backbuffer natively.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Quote:Original post by swiftcoder
Do GPUs offer a 16-bit backbuffer these days? Neither of my two graphics cards seems to support a 16-bit backbuffer natively.


I'm pretty sure they do, since older games still run, and it's definitely possible to set a 16-bit mode with DirectDraw. Perhaps it's emulated somehow, but it's probably done on the card in that case, and the transfer to video memory should be 16-bit, which halves the bandwidth.
Quote:Original post by Ravyne
16bit source images


He wants to build a 3D graphics library, and it will be easier and yield better quality to operate in an unpacked format internally. I think for many tasks, like shading, he will need to unpack the packed color arrays anyway.
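For instance, a minimal sketch of that idea (illustrative names only, and assuming a packed 0xAARRGGBB layout, which is an assumption rather than any particular library's exact format) would unpack to floats for the shading math and only repack on output:

    #include <cstdint>

    // A simple unpacked color for internal shading math.
    struct Color { float r, g, b, a; };

    // Unpack a 32-bit ARGB pixel (0xAARRGGBB) into floats in [0, 1].
    inline Color UnpackARGB(uint32_t p)
    {
        const float s = 1.0f / 255.0f;
        return { ((p >> 16) & 0xFF) * s,
                 ((p >> 8)  & 0xFF) * s,
                 ( p        & 0xFF) * s,
                 ((p >> 24) & 0xFF) * s };
    }

    // Pack floats back into 32-bit ARGB, clamping to [0, 1].
    inline uint32_t PackARGB(const Color& c)
    {
        auto to8 = [](float v) -> uint32_t {
            if (v < 0.0f) v = 0.0f;
            if (v > 1.0f) v = 1.0f;
            return static_cast<uint32_t>(v * 255.0f + 0.5f);
        };
        return (to8(c.a) << 24) | (to8(c.r) << 16) | (to8(c.g) << 8) | to8(c.b);
    }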

This topic is closed to new replies.
