Quote:Original post by Erik Rufelt
Perhaps it's emulated somehow, but it's probably done on the card in that case, and the transfer to video memory should be 16-bit, which halves the bandwidth.
Or it happens in the driver, which then sends 32-bit data over to the graphics card.
I read somewhere that 16-bit is actually slower than 32-bit on then-current hardware.
Performance-wise, the question is whether the improved transfer rate outweighs the greater renderer complexity. Given that software-decoded video runs smoothly even on older PCs, I doubt transfer rate is enough of an issue today to be worth implementing the 16-bit path.
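For what it's worth, the extra complexity of the 16-bit path is basically an added per-pixel conversion pass on the CPU before each upload. A minimal sketch in C, assuming RGBA8888 source pixels (R in the high byte) being repacked as RGB565; the channel layout is an assumption, so adjust the shifts for your actual formats:

```c
#include <stdint.h>
#include <stddef.h>

/* Convert one 32-bit RGBA8888 pixel to 16-bit RGB565 by truncating
   the low bits of each channel (alpha is dropped). */
static uint16_t rgba8888_to_rgb565(uint32_t p)
{
    uint16_t r = (p >> 24) & 0xFF;  /* assumed layout: R in bits 31-24 */
    uint16_t g = (p >> 16) & 0xFF;
    uint16_t b = (p >>  8) & 0xFF;
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Convert a whole frame; this per-pixel pass is the extra CPU work
   the 16-bit path trades for the halved upload size. */
void convert_frame(const uint32_t *src, uint16_t *dst, size_t pixels)
{
    for (size_t i = 0; i < pixels; ++i)
        dst[i] = rgba8888_to_rgb565(src[i]);
}
```

So you pay a full read-modify-write over the frame on the CPU to save half the bus traffic, which is exactly why it only wins when transfer bandwidth, not CPU time, is the bottleneck.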