

This topic is now archived and is closed to further replies.


graphics programming: computer architecture?


Hi all. About two years ago I made my first 2D DirectX game, basically a Missile Command clone. As I had Windows 98 at the time of development, I didn't discover that the game crashed under Windows 2000 until months after the initial release. Unfortunately it took me quite a while to get Windows 2000 installed on my system, but now that I have, I would like to release a working version of the game, because it was well received in general. I'm planning to more or less start over from scratch and take into account all the experience I've gained during those years.

Back then I used 16-bit bitmaps, and I'll probably go for that again since they're relatively fast to render. However, a thought came to mind this morning about going for 32-bit instead. I'm not sure about this at all, so I'll ask you more experienced programmers: which is faster, 16- or 32-bit, considering that the CPU has to transfer this data back and forth? I guess it doesn't matter, since the CPU will probably grab two 16-bit numbers in one go, making it the same as grabbing one 32-bit number, but one can never be too sure, especially since I don't know much about CPU and memory architecture.

Another aspect is when I have all my DirectX surfaces in video memory: does it make a difference on relatively new graphics cards whether I choose 16- or 32-bit surfaces? Again, I don't know much about actual GPU architecture; perhaps one format is preferable over the other because of hardware acceleration. On older machines 32-bit will definitely be slower, considering the amount of data that needs to be handled when doing the actual rendering to the back buffer.

If 32-bit is preferable, I will load my 16-bit bitmaps from disk, convert them to 32-bit, and render like that. The reason for this is that I want to keep disk space down but rendering speed on new machines as fast as possible. It's not about image quality at all (in case this sounds more complicated than necessary).
After writing this I get the feeling that it really makes no difference, other than that considerably more data needs to be handled when going for 32-bit, eventually making it slower on all machines. Another thing I'm considering is making an 8-bit version as well, so that people with old and slow machines will be able to enjoy the game too, but that's another topic for another day. Thanks a million for sharing your experience and explaining how the computer architecture deals with these things. Have a good one!

[edited by - en3my on January 23, 2003 9:50:27 AM]

It would take only a few minutes to do some test blits in 16- and 32-bit. I'm not sure if blit speed is faster in 16-bit; it might be, might not, or might vary depending on hardware.

Either way though, a modern PC could easily cope with the speed.

The best option is to give the user the choice.



