Nathan2222_old

Legacy code (what is it and how is it related to C++)?

30 posts in this topic

I've been seeing this stuff everywhere and Google isn't helping.
Quote from another thread:

. . .
On top of all that is simple momentum -- There's a legacy of C and C++ code in the games industry that's venerable, tested, and works on every relevant platform.

What is legacy code, and how/why is C++ described as legacy code?
Is it because it's old?

Legacy code basically just means old code that is still in use.
 
More often than not, though, the term describes old code that could be done better using modern techniques but that continues to be used, possibly because the cost of replacing it (or the hardware it is running on) is high, or because other code depends on it and nobody wants to risk breaking that dependency.

So that's why other languages have a hard time becoming as popular as C++. A newer language may be easier, maybe faster, and may require less code, but it isn't worth changing because the consequences outweigh the advantages.

As Bjarne Stroustrup put it:

 

"There are only two kinds of languages: the ones people complain about and the ones nobody uses".

 

The only reason there is so much legacy C and C++ code is that C and C++ were common back then, are still common, and will remain common in the future (after all, C++ is pretty much *the* standard language in software engineering).

If C# had been around back then and had been deemed better than C++, then there would be lots of legacy C# code lying around now.

 

A good example of this is Microsoft VB6. There is a tonne of legacy VB6 code lying around, but unlike C++, the newer VB.NET is not backwards compatible, so developers are reimplementing it in current (more portable) languages (like C++ and Java).

 

That Stroustrup comment is spurious at best.  There are a large number of languages that never became popular but never really went away.  Smalltalk and LISP are two examples that jump to mind immediately.  Then there are tons of domain-specific languages, which are almost by definition unpopular yet still used.  Haskell is probably a good example here, but you could also consider FORTRAN or MATLAB in this area.

 

FORTRAN may not be fair on that list, as my understanding is that bastard is uber-popular amongst the math and engineering sets...


Surely your comment just agreed with that quote.

Languages may never fully go away, but the popular ones that get used 24/7 naturally generate legacy code. So again, the sheer popularity of C and C++ is what has caused the influx of legacy code.

 

I honestly have never seen Smalltalk code in the wild (let alone legacy libs), but I come into contact with Lisp quite often via my use of Emacs, and admittedly I have seen some pretty crusty Lisp (even though it is a slightly different dialect) ;)

Edited by Karsten_

In another 50 years, the original legacy C and C++ code will probably still work, so what will it be called then? Archaic? In my opinion, if legacy code works, is maintainable, and is well documented, then it is great. It means that it was originally written correctly and used "future-proof" technology. Contrast this with code that you need to rewrite every few years because you keep choosing technology that becomes obsolete or dies (I am looking at you, VB6!).

So backwards compatibility is a good thing? I heard C++ is also backwards compatible with C, and that struct comes from C while the C++ equivalent is class.
So backwards compatibility is a good thing? I heard C++ is also backwards compatible with C, and that struct comes from C while the C++ equivalent is class.

 

Definitely. My main interest in the C++ language is the backwards compatibility.

If Java could support old C and C++ code and didn't need a JRE, I would be all over that mofo ;)

 

C++ is mostly backwards compatible with C (in that it is very easy to get C code compiling with a C++ compiler). Objective-C is also backwards compatible with C. C++ is a little stricter than C in that it needs explicit casts and so on, but in general it is all good.

 

C++ has structs as well, but the only difference between a struct and a class in C++ is that a struct's members are public by default (a class's are private by default). A well-designed C++ application can make use of both structs and classes.

 

I might add that although C++ is backwards compatible, the design of C++ code is different and should be treated as such. New(ish) features like custom deleter functions in smart pointers make dealing with C code much nicer, however.

 

I think the best example I can think of is OpenGL, which is a C library that can be consumed directly by C++ code. Contrast this with other languages, where you have to use a wrapper (a wrapper is a large project that binds the native C code to another language). These things are notorious for becoming unmaintained and out of sync with the latest version of the original library. Even Microsoft couldn't be arsed any more with XNA (a fat binding for DirectX 9). Another example is the many Java OpenGL bindings. These are platform specific (so you instantly lose one of the main selling points of Java), and they are also a massive pain to rig up compared to just doing the whole thing in C++ and binding at compile time (try writing a simple C++ OpenGL application and then try the same thing in Java, and make up your own mind).

 

Plus, C and C++ can easily go the other way too. They can call, for example, a Java library or a .NET library using libjni or libmono (or even Microsoft's C++/CLI). Trying to get Java code to call a .NET library is extremely fiddly.

 

That said, some of the features I have just ranted about are not always necessary for games development, so this is largely irrelevant to most people. It still comes down to using the language you prefer and can get the game done quickest in ;)

Edited by Karsten_

But doesn't backwards compatibility increase the complexity of a language, by keeping old features around that newer ones could replace?
By putting both options in one language, don't you increase its complexity until it becomes really bad?

Well, C++ does have a deprecation system; std::auto_ptr<T>, for example.

 

Also, if you don't use an old feature of a language, how could it affect your project negatively? Just because the language supports something doesn't mean you have to use it. The same goes for new features; I know a lot of developers I work with tend to over-use language features just because they are "new and cool".

 

Most C and C++ compilers also have a way to specify the standard to use via a compile-time flag (e.g. -std=c++11, -std=c++0x, -std=c99). GCC, for instance, also has "non-standard" standards support (e.g. -std=gnu++0x) and even an extension that adds basic RAII to GNU C (can't recall the exact flag).

 

What would you prefer? A slightly more complex language, or one where a library you rely on no longer compiles because the language has changed?

Ironically, using something like Java or .NET does not solve this, because both runtimes are themselves implemented in C/C++, so a breakage in the underlying C or C++ language has a knock-on effect on those entire platforms (porting Java to a new platform is a massive task, made almost impossible if the C++ language changed every year).

Edited by Karsten_


It wouldn't, if it wasn't based on C++.
What's deprecation? Some #includes in C++ are deprecated.

As far as my experience goes, code is not called legacy because it is old; it is called legacy because no one supports it any more (and, therefore, no one properly understands how it works).


As far as my experience goes, code is not called legacy because it is old; it is called legacy because no one supports it any more (and, therefore, no one properly understands how it works).

So does that mean they don't understand what runs their own systems?
If that is true, what is the benefit of keeping something you don't understand and that isn't supported?

 

 

But doesn't backwards compatibility increase the complexity of a language, by keeping old features around that newer ones could replace?
By putting both options in one language, don't you increase its complexity until it becomes really bad?

 

 

(Almost complete) compatibility with C is a feature of the C++ programming language; read more here: http://www.stroustrup.com/bs_faq.html#whyC

 

For the legacy code question: it exists because of an ancestral and universal law in the programming world that can be summarized as "if it works, don't touch it". Few people argue with it, though some hold slightly different forms (I honestly prefer this one: "if it works and it doesn't create big issues, don't even try to touch it, or I will cut your fingers!")...

A good example of legacy code in the game industry could be the use of DirectInput: DirectInput is deprecated (the last version shipped with the DirectX 8.x SDK), but it still works well, and for many games its limitations are not a big issue; plus, you can use it to support all the HIDs (human interface devices) that are not supported by other APIs (like XInput)... DirectInput is actually still used in most AAA games and game engines/SDKs (Valve's Source, Crytek's CryEngine, Torque 3D, Unreal Engine, etc.).

Edited by Alessio1989

As far as my experience goes, code is not called legacy because it is old; it is called legacy because no one supports it any more (and, therefore, no one properly understands how it works).

Agreed. Although I guess that as soon as a dependency our code relies on has had support dropped, no matter how new the code is, it suddenly becomes "legacy". This is why portable code is very important (especially nowadays, when for some reason companies think experimental == modern).
 
Although I find it quite amusing how much of Microsoft's legacy technology these days is outliving its successors. I have already placed my bets on MFC outlasting WPF (a technology at least three generations newer).
 

So does that mean they don't understand what runs their own systems?
If that is true, what is the benefit of keeping something you don't understand and that isn't supported?

Because if it still works, then it is still useful. If it is an old crusty black box, then that is a shame, but it still probably doesn't justify a rewrite.

We don't understand what's behind closed-source software, and yet people are more than happy to use it.

I guess in some ways, closed-source software is deprecated as soon as you have purchased it, because you cannot maintain the code yourself.

(Full disclosure, I am a BSD user/developer and advocate open-source for everything people do).

Edited by Karsten_

It may not matter to the user, but shouldn't it matter to the company that owns the software?
It's like that PHP blog I read: everything in it is bad, but because it's all you've been using and have come to accept, you don't see any need to change it.
And another one about how legacy code comes about: it's not a single programmer that causes it, but a whole series of programmers who have each come across the same weird code and thought, "I know it definitely needs to be rewritten, but if it hasn't been changed yet, why should I be the one to change it?"
The more you believe it works and keep trying to justify it, the more gets added to it, until it's too late to rewrite it without significant consequences (unless you find a language that can rewrite the whole program in one line).

@Alessio: the C++ FQA contradicts and almost condemns everything in that FAQ.

I have no problem with closed-source software. I am a happy Windows user. I don't know what BSD is.


So does that mean they don't understand what runs their own systems?
If that is true, what is the benefit of keeping something you don't understand and that isn't supported?

There's no benefit, other than that the system works and will likely continue to work. The implication of this definition of "legacy" is that no one is left who understands the old code precisely enough to replace it with newer, better, cleaner code. It happens all the time: some medium-sized utility at a large warehouse operation becomes critical to the business, developed by one lone programmer and hacked to add new features over the years, and suddenly that programmer leaves the company, retires, or dies. The business relies on that software, but can't risk changing it and can't afford to hire a consultant to come in and analyze it to the necessary degree. You'd like to think that just any programmer can come in, read the code, and know what's going on, but that's not the case in reality. In reality there are all sorts of little assumptions and gotchas in most code bases (particularly of the type I describe here), and documentation, if it even exists, is often outdated or just plain wrong.

Yes, the business is precariously positioned to rely on such software, but what can they do?


There's somewhat less of this in the games industry, but I recall reading in an article that Madden Football -- all the way up until its Xbox 360 and PS3 incarnations -- still had some C code in it that had originated in the Sega Genesis version. The first few Halo games, originally developed for the Mac, had a codebase that hacked some C++-like features into C (the Mac platform didn't have great support for C++ back in the day, and its frameworks to this day are written in, or at least exposed as, Objective-C rather than C++ or even vanilla C). I'd wager that the latest Unreal Engine has code stretching back to the original Unreal.

Legacy code is just older code that's either difficult or unfeasible to replace, but which is nonetheless necessary to continue using. It's most often mentioned in relation to C++ just because C and C++ were among the first very popular, widely used languages. But there's a non-trivial amount of legacy COBOL and FORTRAN code running all kinds of finance and business institutions. In fact, if you understand either one (along with the hardware and related software systems of the day) as well as the old-timers did, and also know a modern language well enough to re-implement those systems, you can probably earn yourself a half-million-dollar salary or more through consulting work.
Isn't it dangerous to have an important piece of software that your company depends on running on a codebase so complicated that even "spaghetti" can't describe it?
What if, say, after 10 years someone comes along and alters a single thing, it crashes the whole system, and when they try to correct it they find there's so much copy-and-pasting and so many other problems that it is not possible to find the fault?
Won't you then have to rewrite the whole thing from scratch? And it will be more complicated than it was 10 years before.
Is this the problem facing healthcare.gov? It supposedly has 10 times more code than Windows 7, and more than twice the code of Unreal Engine needs to be replaced.

Isn't it dangerous to have an important piece of software that your company depends on running on a codebase so complicated that even "spaghetti" can't describe it?
What if, say, after 10 years someone comes along and alters a single thing, it crashes the whole system, and when they try to correct it they find there's so much copy-and-pasting and so many other problems that it is not possible to find the fault?
Won't you then have to rewrite the whole thing from scratch? And it will be more complicated than it was 10 years before.

 

These decisions aren't usually left to the people who understand why and how it's a problem. A friend was telling me about how they have four computers, now five years old, that run critical software which could cost them a lot of money if it breaks down and could take a while to replace. She recommended they get newer computers because she felt the risk of failure after five years was, while not huge, too high. They wouldn't pull the trigger, even though new PCs would be a pretty small addition to the overall budget.


That sounds risky, like putting all your eggs in a spoilt basket.

Legacy code isn't always bad. It's just old. Changing old just because it's old isn't a good idea, especially on shipping products. For example, the code base of Guild Wars (and Guild Wars 2) doesn't use the standard C++ library for containers, et cetera. All the containers are hand-rolled. This was done (among other reasons) because in the formative years of the code base, the standard library wasn't very well implemented. The modern standard library is much better, and quite powerful, and while sure it would be nice to rip out all that old code and replace it with standard stuff... why? The existing code works and is tested, and introducing a change (especially on that scale, but not just on that scale) has a risk of breaking the functionality. 

 

Why take such a risk for no demonstrable benefit? Until there is a demonstrable benefit (and again, being "new" isn't one), taking that risk is usually a poor business and engineering decision.

 

 

Even legacy code that is bad carries that change risk. Bugs are routinely discovered in code that we've shipped; sometimes we don't fix them immediately because the ramifications of the change need analysis even if the change makes the code better from an engineering standpoint.

 

When you work on large projects with serious business implications, not every decision can be based on pure engineering.

 

Somebody mentioned Halo earlier on; Bungie still uses that very same code base in Destiny, and it's in the new Halo games as well. Much of it has evolved over time, but there are definitely files that retain much of the original code or comments from, for example, Aleph One's code (though you may need to go back a fair few years, since the project has diverged).

 

There's little point (and lots of cost) to rewriting everything from scratch every time a studio starts a new game. It already takes man-decades of work to ship a AAA commercial title these days. We cannot afford to increase that.

