# Why is low level programming used to create great games?

## Recommended Posts

Nicholas Kong    1535

Why deal with binary and assembly to create great games? What result does the programmer get out of performing low-level programming on games?

##### Share on other sites
Ludus    1020

Handcrafted assembly was used to create games for older systems such as the NES and SNES. It was necessary to write the code in assembly since the processors of those systems were slow. By writing the assembly code by hand, the programmers could make efficient use of every last bit of processing power through various methods and tricks. As previously mentioned, this is no longer needed to create games since processors have gotten faster and compilers have gotten better at optimizing high level code when translating it to machine language.

Edited by Ludus

##### Share on other sites
Buster2000    4310

Assembly isn't really used for programming games anymore, other than to debug some really tricky bugs.  Modern compilers can optimise better than hand-coded assembly.  Also, the assembly that was optimal 10 years ago may not be optimal on a newer processor.  Even for stuff like SSE it is more common to use intrinsics.

##### Share on other sites
Titan.    159

To complete the previous answers: most of the time you don't need to write assembly (and will likely never have to), but like any other domain that needs high performance, you DO need to know how most of it works underneath to perform some optimizations.

Edited by Titan.

##### Share on other sites
Norman Barrows    7179

The last time I had to go to the metal was about 1993: assembly code for a real-time blitter that did blit, mirror, zoom, and rotate simultaneously.  As I recall, this was with Watcom C++, and the target PC was a 386, or was it a 486? It was one routine, about a screen and a half of code. The phreaker on the team wrote it in about a week; I spent another week tuning it for maximum performance.

Edited by Norman Barrows

##### Share on other sites
kburkhart84    3182

Well, I should just add that for this discussion I think that C++/C would be considered lower-level languages as well, considering that most AAA games/engines are written with them.  The reason for this is that you get the control and speed, and more so the lower you go, with assembly being the ultimate in control and speed, though as stated above that has changed with the modernization of compilers.

That doesn't mean development speed is faster.  A game can usually get created much faster in something like UDK/Unity/Shiva/GameMaker than by programming it in low-level languages.  But this assumes that one of those programs/frameworks has the requirements you need for your game, and that you know how to use them.  For most indie games that is the case, but AAA studios want/need lower-level access to things that most indies just don't need.  For example, Unity's scripting is pretty fast, relatively speaking, but it won't be as fast as well-coded C++.  It will beat badly coded anything though :)  Some AAA games really need that speed, for example because they would like to release for as low-grade hardware as possible, while not losing any gameplay (which in general many AAA games still suck at).

In reality, for us indies, the best solution is something like Unity or GameMaker most of the time.  We don't usually need the lower-level speed of C++; rather, we need lower development times.  Our games aren't going to beat out AAA games in sales, so to survive we need to make more and more fun games, without spending too long on the art assets required for AAA engines.  And then, the creators of Unity and GameMaker are both smart enough to include ways to use external code created outside the software.  The interfaces are sometimes limited, like GameMaker being limited to char* strings and doubles when sending/receiving from DLLs, but from that you can do almost anything, and the same applies to Unity's plugin system.  So in the end, us indies get the best of both fast development and access to lower-level languages.

##### Share on other sites
Nicholas Kong    1535

Being able to read assembly is a valuable skill to have when debugging any kind of system-level software.

Quite a few times I've seen someone exclaiming "Argh, why is my [insert complex C++ code] crashing here", when a veteran swoops in with a hexadecimal memory and assembly code viewer, tracing through the code to point out a now-obvious point of failure....

Not sure what you mean by "binary". Binary file formats (i.e. not text / human-readable file formats) are used for efficiency. I can parse a binary model file in several microseconds, compared to several milliseconds (or seconds) for an XML file.

Also, all of these details -- low-level stuff -- belong in the inner workings of game engines.

Actual games are often not written in a low-level language, because they don't have to be. Games are often written in "productivity" languages, like Lua or Python or C#, to lessen the time/budget required to build them.

Only the really performance critical parts (i.e. the engine) are worth spending an excessive amount of time working on. Lower level programming is more verbose and explicit, which lets you ensure that things are occurring in a very precise way (often for performance), but the flip-side is that this requires more time, and more experienced staff.

When you're calculating 10,000 animated transform matrices (an engine task) then that becomes very computationally expensive -- it takes quite a bit of CPU time, which means it has an impact on the FPS counter. This means that you start to care about clock cycles, branch mispredictions and L1 cache behaviour, which are issues that low-level languages allow you to address.

When you're deciding which animation a dozen characters should choose next (a game task) -- this doesn't take a lot of CPU time -- then you don't care about micro-optimizing all those low-level details to gain back an extra nanosecond of time, so you're better off using whatever language makes it easiest to solve the problem in an elegant way.

by binary I mean base two numbers: 0 and 1

##### Share on other sites
Nicholas Kong    1535

Well, I should just add that for this discussion I think that C++/C would be considered lower-level languages as well, considering that most AAA games/engines are written with them.  The reason for this is that you get the control and speed, and more so the lower you go, with assembly being the ultimate in control and speed, though as stated above that has changed with the modernization of compilers.

That doesn't mean development speed is faster.  A game can usually get created much faster in something like UDK/Unity/Shiva/GameMaker than by programming it in low-level languages.  But this assumes that one of those programs/frameworks has the requirements you need for your game, and that you know how to use them.  For most indie games that is the case, but AAA studios want/need lower-level access to things that most indies just don't need.  For example, Unity's scripting is pretty fast, relatively speaking, but it won't be as fast as well-coded C++.  It will beat badly coded anything though.  Some AAA games really need that speed, for example because they would like to release for as low-grade hardware as possible, while not losing any gameplay (which in general many AAA games still suck at).

In reality, for us indies, the best solution is something like Unity or GameMaker most of the time.  We don't usually need the lower-level speed of C++; rather, we need lower development times.  Our games aren't going to beat out AAA games in sales, so to survive we need to make more and more fun games, without spending too long on the art assets required for AAA engines.  And then, the creators of Unity and GameMaker are both smart enough to include ways to use external code created outside the software.  The interfaces are sometimes limited, like GameMaker being limited to char* strings and doubles when sending/receiving from DLLs, but from that you can do almost anything, and the same applies to Unity's plugin system.  So in the end, us indies get the best of both fast development and access to lower-level languages.

What does badly coded mean? Is it determined by coding style, or by unoptimized code?

##### Share on other sites
kburkhart84    3182

What does badly coded mean? Is it determined by coding style, or by unoptimized code?

I mean more by an optimization thing, as in using the wrong algorithm for something, recreating textures every frame, doing a one-time thing over and over in a loop, etc...  I know one of those things is actually a graphic API kind of thing, but if you were using a higher level system, it would handle the textures for you, so in that sense it is a valid example.

Coding style is another topic, and generally doesn't affect run-time speed, though it can affect speed of development, especially if there are more than a single programmer with different styles, and also if it is code that is going to have to be changed later.

##### Share on other sites
Pink Horror    2459

Assembly isn't really used for programming games anymore, other than to debug some really tricky bugs.  Modern compilers can optimise better than hand-coded assembly.  Also, the assembly that was optimal 10 years ago may not be optimal on a newer processor.  Even for stuff like SSE it is more common to use intrinsics.

Working with intrinsics in C++ is "low level programming" in my book, which is what the OP asked about. I never really write assembly, but I have to read assembly and write code with a target assembly change in mind. I use intrinsics often. I fairly recently had to throw intrinsics all over the place to help out some math-heavy functions like decompression and, to quote Hodgman, "deciding which animation a dozen characters should choose next (a game task)". Well, it isn't always a dozen characters, though I'm sure on some frames exactly a dozen characters do need to pick new animations. The meat of the decision-making is an engine task. The game is responsible for filling out an animation query structure.

Well, for the other part of the OP's question, I never type stuff out in 0s and 1s. I am someone who goes over to someone else's desk and opens up the assembly view and the hex memory view to figure out what is happening, if that counts as working in binary.

##### Share on other sites
Hodgman    51220

by binary I mean base two numbers: 0 and 1

There is no task where you'd ever be looking at a screen full of zeros and ones.

Numbers are numbers, and the way we read or write them is where the base comes in.
You can write fifteen as 15 (base ten),  F (base sixteen), 17 (base eight), 1111 (base two), etc.. but they're all just different ways of looking at the same thing.
In low-level programming, you'd be much more likely to use hexadecimal (base sixteen) than binary (base two) when trying to visualize arbitrary data.

The fact that computers use base-two internally (binary digital systems, transistors that are "on" or "off") is an economical issue. The implication of this economical choice is that our maximum values are usually two-to-the-power-of-something.
e.g. on a 32 bit system, it's common for a hardware register to be able to represent 2^32 different values.
In decimal (base 10), this just looks like some arbitrary string of digits:
2^32 - 1 = 4294967295 (decimal)
In binary (base 2), it makes sense, but it's extremely long and hard to work with:
2^32 - 1 = 11111111111111111111111111111111 (binary)
So instead, we often use a base that is itself a power of two, such as hexadecimal (base 16), which makes these power-of-two numbers appear neat, but is much more compact:
2^32 - 1 = 0xFFFFFFFF

A common task where this might be applicable is where you're using a 32-bit hardware register to store 32 different boolean values.
Each boolean can be represented by 2^x, where x is 0 through 31.
So if you wanted boolean #0, #3 and #7 to be true, your register would hold the number:
2^0 + 2^3 + 2^7 == 1 + 8 + 128 == 137
137 (decimal) looks arbitrary.
00000000000000000000000010001001 (binary) makes sense, but is long.
0x00000089 (hex) makes sense when you get used to using hex (0x01 + 0x08 + 0x80 == 0x89).

The natural pattern in hex starts to become more apparent if you look at a table of the powers of two:

| base 10 | base 16 | base 2 |
| ------: | ------: | -----: |
| 1 | 1 | 1 |
| 2 | 2 | 10 |
| 4 | 4 | 100 |
| 8 | 8 | 1000 |
| 16 | 10 | 10000 |
| 32 | 20 | 100000 |
| 64 | 40 | 1000000 |
| 128 | 80 | 10000000 |
| 256 | 100 | 100000000 |

I think the Hodgster has invented a new notation whereby 2^x = 2^x - 1

hahahah, oops. I had a brainfart.
Yes, if there are 2^32 possible values, then the maximum one is 2^32 - 1. Fixed my post.

Edited by Hodgman

##### Share on other sites

What about debugging 2 colour palette fonts or bitmaps? ;)

##### Share on other sites
Pink Horror    2459

by binary I mean base two numbers: 0 and 1

There is no task where you'd ever be looking at a screen full of zeros and ones.

Numbers are numbers, and the way we read or write them is where the base comes in.

You can write fifteen as 15 (base ten),  F (base sixteen), 17 (base eight), 1111 (base two), etc.. but they're all just different ways of looking at the same thing.

In low-level programming, you'd be much more likely to use hexadecimal (base sixteen) than binary (base two) when trying to visualize arbitrary data.

The fact that computers use base-two internally (binary digital systems, transistors that are "on" or "off") is an economical issue. The implications of this economical choice is that our maximum values are usually two-to-the-power-of-something.

e.g. on a 32 bit system, a common maximum value would be 2^32

In decimal (base 10), this just looks like some arbitrary string of digits:

2^32 = 4294967295 (decimal)

In binary (base 2), it makes sense, but it's extremely long and hard to work with:

2^32 = 11111111111111111111111111111111 (binary)

So instead, we often use a base that is itself a power of two, such as hexadecimal (base 16), which makes these power-of-two numbers appear neat, but is much more compact:
2^32 = 0xFFFFFFFF

I could have sworn that the powers of 2 are even.

Edited by Pink Horror

##### Share on other sites

I think the Hodgster has invented a new notation whereby 2^x = 2^x - 1

EDIT: Although that reminds me of someone who once tried to tell me that 2^N - 1 was prime for all integers N > 2. Yeah, 15, 255, 65535, totally prime those numbers ;)

EDIT2: It's not even true for N prime either, 2^11 - 1 = 2047 = 23 * 89

EDIT3: Hodgman's quote broke my sup? It could be a new notation, just no-one knows about it yet

2^x = 2^x - 1

(note the caret)

##### Share on other sites

Why deal with binary and assembly to create great games? What result does the programmer get out of performing low-level programming on games?

Binary and assembly are what computers read/write. English/Spanish/whatever is what humans read/write.

Programming languages are halfway between. We convert our ideas and thoughts into programming languages, then we have the computer convert the programming language (using 'interpreter' or 'compiler' software) into assembly, and then the computer executes the assembly when asked to. We hardly ever write directly in assembly.

As for binary, everything in computers at a hardware level is stored in binary. All your MP3s, images, text files, everything, is stored on harddrives or in RAM in binary. Binary is just a way of representing data. How that data is interpreted is what really matters.

As for "low level programming", "low level" is a relative term. Games can be written in any of hundreds of different languages, but as far as the triple-A videogame industry for console games goes, usually C++ is chosen. C++ is often considered "low level". The reason why low-level languages like C++ are used is performance. C++ can be harder to develop with, but it can be optimized further than many other languages, and this optimization was required for the weaker hardware in consoles and PCs to pump out beautiful games. This is slightly less of an issue today, as other languages are fairly fast also and our hardware is much better, but it's still an issue that many game development studios have to consider: especially when consoles usually have far inferior hardware to PCs, when PCs have to work through more layers of OS software between your program and the hardware, and when the average gamer isn't satisfied with last year's graphics and wants things to look even better (because of marketing, and because game studios try to one-up each other in graphics to generate sales).

Edited by Servant of the Lord

##### Share on other sites
kunos    2254

by binary I mean base two numbers: 0 and 1


really?

And you REALLY believe somebody is programming with 0 and 1?

##### Share on other sites
TheChubu    9446

What about debugging 2 colour palette fonts or bitmaps? ;)

You parse the 1s and 0s as "white" and "black" so you don't have to look at them directly in the eye.

##### Share on other sites
deftware    1778

The only experience I have with assembly is in a disassembler, when I want to make another program do something it wasn't made to do. I'd hand-craft new opcodes, to either overwrite existing ones or insert them in 'code caves' (a bunch of nop codes) and jump to them. Then I'd slap that hex in some process-patching code, and play cstrike beta with transparent walls.

##### Share on other sites

The only place where I use assembly is in Mega Drive homebrew o_o (no way I'd touch C or anything higher level on that thing). I don't think I've ever touched assembly at all on PCs, except back when I tried to make stuff for DOS.

I swear, I thought this was going to be about why developers still use C++ instead of Python, Lua, Javascript or something like that.

##### Share on other sites
Nicholas Kong    1535

There is no task where you'd ever be looking at a screen full of zeros and ones.

Numbers are numbers, and the way we read or write them is where the base comes in.
You can write fifteen as 15 (base ten), F (base sixteen), 17 (base eight), 1111 (base two), etc.. but they're all just different ways of looking at the same thing.
In low-level programming, you'd be much more likely to use hexadecimal (base sixteen) than binary (base two) when trying to visualize arbitrary data.

The fact that computers use base-two internally (binary digital systems, transistors that are "on" or "off") is an economical issue. The implications of this economical choice is that our maximum values are usually two-to-the-power-of-something.
e.g. on a 32 bit system, it's common for a hardware register to be able to represent 2^32 different values.
In decimal (base 10), this just looks like some arbitrary string of digits:
2^32 - 1 = 4294967295 (decimal)
In binary (base 2), it makes sense, but it's extremely long and hard to work with:
2^32 - 1 = 11111111111111111111111111111111 (binary)
So instead, we often use a base that is itself a power of two, such as hexadecimal (base 16), which makes these power-of-two numbers appear neat, but is much more compact:
2^32 - 1 = 0xFFFFFFFF

Oh I see. That's pretty neat. I never thought of it like that. Good to know, hex is used more than binary.

##### Share on other sites
Nicholas Kong    1535

As for "low level programming", "low level" is a relative term. Games can be written in any of hundreds of different languages, but as far as the triple-A videogame industry for console games goes, usually C++ is chosen. C++ is often considered "low level". The reason why low-level languages like C++ are used is performance. C++ can be harder to develop with, but it can be optimized further than many other languages, and this optimization was required for the weaker hardware in consoles and PCs to pump out beautiful games.

Interesting! This was the answer I was looking for too!

What kinds of things can C++ optimize further? Is it mainly to improve the graphics?

Edited by warnexus

##### Share on other sites

Looks like subscripts and superscripts don't work in quotes then.

I was going to post a bug about it in Comments and Suggestions, but that doesn't seem to be working either (or it is taking an incredibly long time, I got bored and gave up after a while). Ho hum.

##### Share on other sites

What kinds of things can C++ optimize further? Is it mainly to improve the graphics?

Many higher-level languages put in a lot of safeguards to protect you (this is good!), but those safeguards can slow things down slightly.
When you're trying to get a billion triangles on-screen and handle complex AI while processing realistic physics on underpowered consoles, every little bit matters. Especially when that 'little bit' is called millions or billions of times each frame.

For the average indie game developer such as myself, this isn't an issue and any language they choose is fine.
The general rule of thumb for game developers is don't optimize prematurely - that is to say, don't worry about performance until it actually really truly is an issue. (Even so, you still want to plan for what you know will already be coming down the road in terms of your game's requirements)

• Minecraft is written in Java.
• Eve Online has its core written in C++, but then uses a lot of Python. Same with Disney's Pirate MMO.
• Most Android games are written in Java
• Most iOS games are written in Objective-C

So it's not that "Most great games use C++" (as the title of this thread implies). It's "Most triple-A console games use C++ (or C)".

C++ just doesn't protect you like other languages do, and because it doesn't, it can take certain shortcuts to optimize things. These shortcuts add up, but only if your game is already doing a crazy amount of work.

Most newer programmers start to use a language (even C++), and their game starts to go slow, and they ask, "Why is [the language I am using] slow?" or, "Why is [the library I am using] slow?". The question they should ask, anytime their game is going slow, is, "Why is the code I personally wrote, that just happens to use [language] and [library], slow?", because it's almost always their own code that is wrong, and not the language or library.

The quality of your code and software architecture impacts your performance way more than your choice of language or choice of library. BUT! If you are already an excellent programmer and already pushing things very close to the limit, then the language and library choice can add a small boost of extra performance that may come in handy for the next Modern Warfare or Crysis game.

But if you're not already architecting your game right, or if you are making common mistakes in your code, the difference in speed when switching between languages won't have much of an effect compared to the great gain in speed someone gets from learning to code properly (which is gradually learned through years of experience).

My code architecture is a weak point that I would benefit a lot from studying more. At the per-function level, I'm satisfied. At the per-class level of architecture, I could learn some better design. And at the per-game level of architecture, my software engineering/architecture skills are severely lacking.

Thankfully, the game I am making doesn't require the same level of performance as Halo 9 and Call of Duty Eleventeen. Any slowdown in my code I know is because my code isn't fast enough. This is fine, because I'm a hobbyist game programmer trying to become a professional indie game developer, not a major studio with a $500 million budget. I wouldn't even be able to make or purchase the quality of art necessary to require that kind of performance.

Every time I say, "the compiler is broken", I end up finding out that it was really my code that was wrong. Every time*.

Every time I think, "the language is too slow", it's almost certainly my own code that is doing something wrong.

*Conversely, the few times the compiler actually was broken, I thought it was my code.

Edited by Servant of the Lord