Tamior

Old Games


Hi, folks! :) I'm interested in how old games (not text-based ones, but games with actual graphics) were made. What techniques were used? Did they code it all in asm? Anyone who has interesting links on the topic, please share :)

Most of the source code for old games is GPL'ed, so you can download the source and recompile it. I know games like Quake 2 used asm for rendering and timing, but I don't know if they used it for anything else. To answer your question, I believe they used a mix of C/C++/asm, but I could be wrong.

Depends how far back you want to go:
* Very long ago, it was all ASM, with lookup tables for everything, even addition (Commodore 64).

* Then it was higher-level languages, with ASM for everything requiring performance, and still plenty of lookup tables (Atari/Amiga/PC).

* Then PCs took over and got faster, so ASM was only used in the most time-critical inner loops; everything else was coded in high-level languages. Lookup tables were mostly reserved for very expensive inner-loop calculations and trigonometry functions.

* Then hardware-accelerated graphics cards entered the scene and compilers got sophisticated enough to produce reasonably fast code. This, coupled with HW acceleration and faster PC speeds, made hand-optimized assembly programming obsolete. Thus it was that the beautiful art was forever lost to mankind.


Transport Tycoon, as an example of a fairly recent game in the grand scheme of things, was written exclusively in ASM, and is still one damn fine piece of software.


It's still possible to hand-write optimized ASM and get certain inner loops [edit]20 percent, not 20 times![/edit] faster than anything a compiler could ever produce. But for the most part it's no longer necessary, since we have such immense computing power sitting in our little gray boxes.

[Edited by - Bad Maniac on November 13, 2005 4:02:15 PM]

Try Intruder. I know you don't want text-based, but it's great fun and has lots of replay value (try the church and the guy's closet for hidden points, and set fire to the priest to access the downstairs!). I can't find the source for it, but it's still fun!

Quote:

It's still possible to hand write optimized ASM and get certain inner loops 20 times faster than any compiler could ever produce.


I have serious doubts about this. Do you have any real examples?

Quote:
Original post by RDragon1
Quote:

It's still possible to hand write optimized ASM and get certain inner loops 20 times faster than any compiler could ever produce.


I have serious doubts about this. Do you have any real examples?
It's a bit off topic, but I'll take any bragging opportunities I can get..

I've recently spent some time developing an image compressor that uses vector quantization for realtime image decompression (essentially the software equivalent of hardware texture compression). The algorithm is highly asymmetrical, so while decompression (blitting) is blazingly fast, the compression can be an overnight batch job.

In my compressor, 99% of the time is spent doing a brute-force search, matching a set of pixels from the source bitmap against a codebook.
To optimize it I switched to a counter-intuitive interleaved fixed-point data structure, instead of the direct floating-point implementation seen in the article's example code. The result was more than a 15x improvement over the original C code.
Together with a few algorithmic improvements, it's now possible to compress a 640x480 image in less than a second on fast machines =)

And, yes, I do realize that an intelligent compiler might be able to produce something similar to my code if given the same datastructures to work with. But this is kind of a moot point since I essentially had to write the assembly code to figure it out in the first place.

Was this run on a modern computer and compiled with a modern compiler?

Modern CPUs are very fast at floating-point math, so I don't see how your optimization would have helped (or why it couldn't have been done in C).

Quote:
Original post by RDragon1
I have serious doubts about this. Do you have any real examples?
Not really related to the topic...

In almost any application where data can be operated on differently by different processors and/or instruction sets, the compiler can only ever optimize for the currently selected CPU, or for a chosen best overall target. Hand-written assembly has the advantage of multiple code paths, each highly optimized for its specific target platform, selected at run time depending on the available instruction sets.

Examples include software rendering, where hand-optimized assembly code is essential for getting realtime performance even on today's massively fast machines.
In almost all performance-intensive applications, such as audio or video editing software, emulators, and realtime raytracers, you still find large blocks of pure assembly, simply because no other method gave the same performance benefits. And I'm willing to bet Epic still has a few choice lines of ASM in their Unreal engines.

I have co-written a software 2D blitting library, and we have seen the hand-written assembly versions of our blitters run sometimes 100 times faster than a standard C implementation of the same blitters with all optimization options enabled in the compiler.

You always know better than the compiler what your code is doing, and can thus optimize better for the task at hand. Furthermore, you can view the compiler's output assembly and improve on it, while the compiler can't read your mind and figure out how best to optimize your application.

Quote:
Original post by Daniel Miller
Modern CPUs are very fast at floating-point math
Modern computers are even faster at integer math, and at a given bit width, fixed point can be more precise than floating point. So the optimization might not have been solely for performance benefits.

But this is getting largely off topic =/

Quote:
Original post by Daniel Miller
Was this run on a modern computer and compiled with a modern compiler?

Modern CPUs are very fast at floating-point math, so I don't see how your optimization would have helped (or why it couldn't have been done in C).
Yes, it was run primarily on my P4, and the C version was compiled with MSVC 6 and 2k5.
My first instinct was the same as yours, especially since the algorithm relies heavily on multiplications, which are very slow on the normal integer unit of a P4.
However, by using 128-bit SSE2 integer instructions and 16-bit words I was able to process and store twice as many words at a time, and unrolling the loop really helped hide latencies (you also get a free addition with the pmaddwd instruction). Additionally, I was able to insert an 8-bit index in the low byte of the resulting 32-bit integer, which really helped with keeping track of the best match.

So I really *had* to write it in assembly language to figure out how to format its input.
