Accessing video memory in C++

33 comments, last by phresnel 14 years, 8 months ago
Quote:Original post by Erik Rufelt
Perhaps it's emulated somehow, but it's probably done on the card in that case, and the transfer to video memory should be 16-bit, which halves the bandwidth.


Or it happens in the driver, which then sends 32-bit data over to the graphics card.

I read somewhere that 16-bit is actually slower than 32-bit on then-current hardware.

Performance-wise, the question is whether the improved transfer rate outweighs the greater renderer complexity. I doubt (based on the observation that software-decoded video runs smoothly even on older PCs) that transfer rate is enough of an issue today to be worth implementing the 16-bit path.
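To illustrate the "greater renderer complexity" of a 16-bit path: every pixel the renderer produces has to be packed down from 8 bits per channel into a 16-bit layout, typically RGB565. A minimal sketch of that conversion (assuming the common RGB565 bit layout; the function name is mine, not from any library):

```cpp
#include <cstdint>

// Pack 8-bit-per-channel RGB into a 16-bit RGB565 pixel by dropping
// the low bits of each channel. This per-pixel conversion is the kind
// of extra work a 16-bit path adds on top of a plain 32-bit copy.
inline std::uint16_t pack_rgb565(std::uint8_t r, std::uint8_t g, std::uint8_t b)
{
    return static_cast<std::uint16_t>(
        ((r >> 3) << 11) |   // 5 bits of red in the top of the word
        ((g >> 2) << 5)  |   // 6 bits of green in the middle
        (b >> 3));           // 5 bits of blue at the bottom
}
```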
I doubt that's possible, since your program can get the backbuffer video memory mapped and accessible as a uint *screen32 or a ushort *screen16. Filling will always be twice as fast (memcpy with half the bytes).

That 16-bit issue is only for hardware rendering, not for transfer from RAM to VRAM.

Even using D3D's optimized driver transfer at 1920x1200 over PCI-e, I get a transfer time of 4 ms just copying 32-bit image data to the GPU.
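The arithmetic behind that 4 ms figure: one 32-bit 1920x1200 frame is 1920 * 1200 * 4 = 9,216,000 bytes, so moving it in 4 ms works out to an effective rate of about 2.3 GB/s. As a small sketch (helper names are mine):

```cpp
#include <cstdint>

// Bytes in one frame at the given resolution and pixel size.
constexpr std::uint64_t frame_bytes(int width, int height, int bytes_per_pixel)
{
    return std::uint64_t(width) * height * bytes_per_pixel;
}

// Effective transfer rate in GB/s for moving 'bytes' in 'seconds'.
constexpr double transfer_rate_gb_s(std::uint64_t bytes, double seconds)
{
    return bytes / seconds / 1e9;
}
```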
Quote:Original post by phresnel
Quote:Original post by stonemetal
Or better yet look at the code in SDL and pixeltoaster(if it is open) and aim for as low a level as possible.


Why would that be better?


He wants to work on implementing his own graphics library, so telling him to go use someone else's graphics library is a bit stupid. Looking at an existing graphics library to see how it gets at the hardware directly may not solve his problem outright, but it points him in the right direction.

Quote:And while in the topic of PixelToaster, it is created using Direct3D, right? And Direct3D is hardware accelerated on most windows machines, right? So, if this is true, then that means that anything I create with PixelToaster is hardware accelerated, right?
Only to the extent that you make use of hardware acceleration. If you don't enable hardware-accelerated lighting, then guess what isn't hardware accelerated.
Quote:Original post by stonemetal
Quote:Original post by phresnel
Quote:Original post by stonemetal
Or better yet look at the code in SDL and pixeltoaster(if it is open) and aim for as low a level as possible.
Why would that be better?
He wants to work on implementing his own graphics library, so telling him to go use someone else's graphics library is a bit stupid.
The OP wants to implement a software rasteriser - I fail to see how using a library to abstract framebuffer access negatively affects that goal?

That is a bit like saying "I want to implement a loader for a new file format, so I will start by writing a hard disk driver"...

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Quote:Original post by swiftcoder
Quote:Original post by stonemetal
Quote:Original post by phresnel
Quote:Original post by stonemetal
Or better yet look at the code in SDL and pixeltoaster(if it is open) and aim for as low a level as possible.
Why would that be better?
He wants to work on implementing his own graphics library, so telling him to go use someone else's graphics library is a bit stupid.
The OP wants to implement a software rasteriser - I fail to see how using a library to abstract framebuffer access negatively affects that goal?

That is a bit like saying "I want to implement a loader for a new file format, so I will start by writing a hard disk driver"...


Yup. To emphasize this: The OP seems not to be interested in device driver development, at least not yet.
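The point being made here is that a software rasteriser only needs an array of pixels plus some way to present it; the presentation layer (SDL, PixelToaster, D3D, ...) is interchangeable and not where the interesting work happens. A minimal sketch of that separation (the Framebuffer type and the commented-out present hook are hypothetical, not from any of the libraries named above):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// The rasteriser's entire world: a width x height array of 32-bit
// pixels in system memory. Everything interesting (triangle setup,
// spans, shading) writes into this; presenting it on screen is a
// separate, swappable concern.
struct Framebuffer
{
    int width, height;
    std::vector<std::uint32_t> pixels;   // 0xAARRGGBB

    Framebuffer(int w, int h)
        : width(w), height(h), pixels(std::size_t(w) * h, 0) {}

    void put(int x, int y, std::uint32_t c)
    {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[std::size_t(y) * width + x] = c;
    }
};

// void present(const Framebuffer& fb);  // supplied by the windowing/
                                         // framebuffer library of choice
```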

