  • 12/09/18 10:16 PM

    The Faster You Unlearn OOP, The Better For You And Your Software

    General and Gameplay Programming

    GameDev.net
    Quote

    Object-oriented programming is an exceptionally bad idea which could only have originated in California.
      - Edsger W. Dijkstra

     

     

Maybe it's just my experience, but Object-Oriented Programming seems to be the default, most common paradigm of software engineering. It's the one typically taught to students, featured in online material and, for some reason, spontaneously applied even by people who never intended to use it.

I know how seductive it is, and how great an idea it seems on the surface. It took me years to break its spell and understand clearly how horrible it is and why. Because of this perspective, I strongly believe it's important that people understand what is wrong with OOP, and what they should do instead.

Many people have discussed the problems with OOP before, and I will provide a list of my favorite articles and videos at the end of this post. Before that, I'd like to give my own take.

     

    Data is more important than code

    At its core, all software is about manipulating data to achieve a certain goal. The goal determines how the data should be structured, and the structure of the data determines what code is necessary.

This part is very important, so I will repeat it:

    Quote

    goal -> data architecture -> code

One must never change this order! When designing a piece of software, always start by figuring out what you want to achieve, then at least roughly think about the data architecture: the data structures and infrastructure you need to achieve it efficiently. Only then write your code to work within that architecture. If the goal changes over time, alter the architecture first, then change your code.

In my experience, the biggest problem with OOP is that it encourages ignoring the data architecture and instead applying a mindless pattern of storing everything in objects, with the promise of some vague benefits. If it looks like a candidate for a class, it goes into a class. Do I have a Customer? It goes into class Customer. Do I have a rendering context? It goes into class RenderingContext.

Instead of building a good data architecture, the developer's attention is diverted toward inventing “good” classes, the relations between them, taxonomies, inheritance hierarchies and so on. Not only is this effort useless, it's actually deeply harmful.

     

    Encouraging complexity

When explicitly designing a data architecture, the result is typically a minimal viable set of data structures that supports the goal of our software. When thinking in terms of abstract classes and objects, there is no upper bound on how grandiose and complex our abstractions can be. Just look at FizzBuzz Enterprise Edition – the reason such a simple problem can be implemented in so many lines of code is that in OOP there's always room for more abstractions.

    OOP apologists will respond that it's a matter of developer skill, to keep abstractions in check. Maybe. But in practice, OOP programs tend to only grow and never shrink because OOP encourages it.

     

    Graphs everywhere

Because OOP requires scattering everything across many, many tiny encapsulated objects, the number of references to these objects explodes as well. OOP requires passing long lists of arguments everywhere, or holding references to related objects directly as a shortcut.

Your class Customer has a reference to class Order and vice versa. class OrderManager holds references to all Orders, and thus indirectly to Customers. Everything tends to point to everything else, because as time passes there are more and more places in the code that need to refer to a related object.
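A minimal C++ illustration of the kind of mutual referencing described above (the classes mirror the Customer/Order/OrderManager example; the exact members are assumed for illustration):

#include <vector>

class Order;                        // forward declaration: the references are circular

class Customer {
public:
    std::vector<Order*> orders;     // Customer points at its Orders...
};

class Order {
public:
    Customer* customer = nullptr;   // ...and each Order points back at its Customer.
};

class OrderManager {
public:
    std::vector<Order*> allOrders;  // the manager points at every Order,
                                    // and therefore indirectly at every Customer.
};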

    Quote

Instead of a well-designed data store, OOP projects tend to look like a huge spaghetti graph of objects pointing at each other and methods taking long argument lists. When you start to design Context objects just to cut down on the number of arguments passed around, you know you're writing real OOP Enterprise-level software.

     

    Cross-cutting concerns

The vast majority of essential code does not operate on just one object – it actually implements cross-cutting concerns. Example: when class Player hits() a class Monster, where exactly do we modify data? Monster's hp has to decrease by Player's attackPower, and Player's xp increases by Monster's level if the Monster got killed. Does it happen in Player.hits(Monster m) or in Monster.isHitBy(Player p)? What if there's a class Weapon involved? Do we pass it as an argument to isHitBy, or does Player have a currentWeapon() getter?

This oversimplified example with just three interacting classes is already becoming a typical OOP nightmare. A simple data transformation becomes a bunch of awkward, intertwined methods that call each other for no reason other than the OOP dogma of encapsulation. Adding a bit of inheritance to the mix gives us a nice example of what stereotypical “Enterprise” software is about.
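A hedged C++ sketch of the dilemma from the example above (the signatures and fields are assumptions made for illustration, not a recommended design):

// Where should the "hit" logic live? Every choice reaches into the other classes.
class Weapon {
public:
    int bonusDamage = 0;
};

class Monster;                         // forward declaration: the classes refer to each other

class Player {
public:
    int attackPower = 10;
    int xp = 0;
    Weapon* currentWeapon = nullptr;   // or a currentWeapon() getter
    void hits(Monster& m);             // option 1: Player owns the logic
};

class Monster {
public:
    int hp = 30;
    int level = 3;
    void isHitBy(Player& p);           // option 2: Monster owns the logic
};

void Player::hits(Monster& m) {
    int damage = attackPower + (currentWeapon ? currentWeapon->bonusDamage : 0);
    m.hp -= damage;                    // Player reaches into Monster's data...
    if (m.hp <= 0) xp += m.level;      // ...and reads it back to update its own
}

void Monster::isHitBy(Player& p) {
    (void)p;                           // option 2 would just mirror the same logic here
}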

     

    Object encapsulation is schizophrenic

    Let's look at the definition of Encapsulation:

    Quote

    Encapsulation is an object-oriented programming concept that binds together the data and functions that manipulate the data, and that keeps both safe from outside interference and misuse. Data encapsulation led to the important OOP concept of data hiding.

The sentiment is good, but in practice, encapsulation at the granularity of an object or a class often leads to code trying to separate everything from everything else (including from itself). It generates tons of boilerplate: getters, setters, multiple constructors, odd methods, all trying to protect against mistakes that are unlikely to happen, on a scale too small to matter. The metaphor I give is putting a padlock on your left pocket to make sure your right hand can't take anything from it.
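A tiny, hypothetical C++ example of that boilerplate: a trivial value wrapped in getters and setters that protect nothing:

class Temperature {
public:
    Temperature() = default;
    explicit Temperature(double celsius) : celsius_(celsius) {}

    double getCelsius() const { return celsius_; }           // boilerplate getter
    void setCelsius(double celsius) { celsius_ = celsius; }   // boilerplate setter

private:
    double celsius_ = 0.0;   // the "padlocked pocket": nothing here actually
                             // needs protecting from the code that uses it
};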

Don't get me wrong – enforcing constraints, especially on ADTs, is usually a great idea. But in OOP, with all the inter-referencing of objects, encapsulation often doesn't achieve anything useful, and it's hard to address constraints that span many classes.

In my opinion, classes and objects are just too granular; the right place to focus on isolation, APIs etc. is at the boundaries of “modules”/“components”/“libraries”. And in my experience, OOP (Java/Scala) codebases are usually the ones in which no modules/libraries are employed. Developers focus on putting boundaries around each class, without much thought about which groups of classes together form a standalone, reusable, consistent logical unit.

     

    There are multiple ways to look at the same data

OOP requires an inflexible data organization: the data is split into many logical objects, which defines a data architecture: a graph of objects with associated behavior (methods). However, it's often useful to have multiple ways of logically expressing data manipulations.

If program data is stored, e.g., in a tabular, data-oriented form, it's possible to have two or more modules each operating on the same data structure, but in different ways. If the data is split into objects with methods, this is no longer possible.
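A minimal sketch of that idea in C++: two unrelated "modules" (free functions here) reading the same tabular data for different purposes (the OrderRow fields are assumptions for illustration):

#include <cstddef>
#include <numeric>
#include <vector>

struct OrderRow {                   // plain tabular record, no methods
    double total;
    bool   shipped;
};

// Billing module: folds over the rows to compute revenue.
double totalRevenue(const std::vector<OrderRow>& orders) {
    return std::accumulate(orders.begin(), orders.end(), 0.0,
        [](double sum, const OrderRow& o) { return sum + o.total; });
}

// Fulfilment module: walks the same rows for a completely different purpose.
std::size_t countUnshipped(const std::vector<OrderRow>& orders) {
    std::size_t n = 0;
    for (const OrderRow& o : orders)
        if (!o.shipped) ++n;
    return n;
}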

That's also the main reason for the Object-relational impedance mismatch. While a relational data architecture might not always be the best one, it is typically flexible enough to operate on the data in many different ways, using different paradigms. The rigidity of OOP data organization, however, causes incompatibility with any other data architecture.

     

    Bad performance

The combination of data scattered across many small objects, heavy use of indirection and pointers, and the lack of the right data architecture in the first place leads to poor runtime performance. 'Nuff said.

     

    What to do instead?

    I don't think there's a silver bullet, so I'm going to just describe how it tends to work in my code nowadays.

Data considerations go first. I analyze what the inputs and outputs are going to be, their format and volume; how the data should be stored at runtime and how it should be persisted; what operations will have to be supported, and how fast (throughput, latencies), etc.

For any data with significant volume, the design is typically something close to a database. That is: there will be some object like a DataStore with an API exposing all the necessary operations for querying and storing the data. The data itself will be in the form of ADT/PoD structures, and any references between data records will take the form of an ID (number, UUID, or a deterministic hash). Under the hood, it typically closely resembles, or actually is backed by, a relational database: Vectors or HashMaps storing the bulk of the data by index or ID, with other ones serving as the “indices” required for fast lookup, and so on. Other data structures like LRU caches etc. are also placed there.
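A rough C++ sketch of what such a DataStore might look like (all names and types are illustrative assumptions, not the exact code described above):

#include <cstddef>
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

using CustomerId = std::uint64_t;
using OrderId    = std::uint64_t;

struct CustomerRow { CustomerId id; std::string name; };
struct OrderRow    { OrderId id; CustomerId customer; double total; };

// Relational-style store: bulk data in vectors, cross-references by ID,
// extra hash maps acting as the "indices" needed for fast lookup.
class DataStore {
public:
    CustomerId addCustomer(std::string name) {
        CustomerId id = nextCustomerId_++;
        customerIndex_[id] = customers_.size();
        customers_.push_back({id, std::move(name)});
        return id;
    }

    OrderId addOrder(CustomerId customer, double total) {
        OrderId id = nextOrderId_++;
        orders_.push_back({id, customer, total});
        ordersByCustomer_[customer].push_back(id);
        return id;
    }

    std::optional<CustomerRow> findCustomer(CustomerId id) const {
        auto it = customerIndex_.find(id);
        if (it == customerIndex_.end()) return std::nullopt;
        return customers_[it->second];
    }

    const std::vector<OrderId>& ordersOf(CustomerId id) const {
        static const std::vector<OrderId> kEmpty;
        auto it = ordersByCustomer_.find(id);
        return it == ordersByCustomer_.end() ? kEmpty : it->second;
    }

private:
    std::vector<CustomerRow> customers_;   // bulk storage, addressed by index
    std::vector<OrderRow>    orders_;
    std::unordered_map<CustomerId, std::size_t> customerIndex_;             // "index"
    std::unordered_map<CustomerId, std::vector<OrderId>> ordersByCustomer_; // "index"
    CustomerId nextCustomerId_ = 1;
    OrderId    nextOrderId_    = 1;
};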

The bulk of the actual program logic takes a reference to such DataStores and performs the necessary operations on them. For concurrency and multi-threading, I typically glue the different logical components together via message passing, actor-style. Examples of actors: stdin reader, input data processor, trust manager, game state, etc. Such “actors” can be implemented as thread pools, elements of pipelines, etc. When required, they can have their own DataStore or share one with other “actors”.
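A minimal C++ sketch of the actor-style glue, assuming a simple blocking mailbox per actor (the Mailbox and InputEvent names are made up for illustration):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

template <typename Msg>
class Mailbox {
public:
    void send(Msg m) {
        {
            std::lock_guard<std::mutex> lock(mu_);
            queue_.push(std::move(m));
        }
        cv_.notify_one();
    }
    Msg receive() {                        // blocks until a message arrives
        std::unique_lock<std::mutex> lock(mu_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        Msg m = std::move(queue_.front());
        queue_.pop();
        return m;
    }
private:
    std::mutex mu_;
    std::condition_variable cv_;
    std::queue<Msg> queue_;
};

struct InputEvent { std::string line; };

// An "input processor" actor: drains its inbox, does its own work, and
// forwards results to the next actor; no shared object graph is needed.
void inputProcessor(Mailbox<InputEvent>& inbox, Mailbox<InputEvent>& gameState) {
    for (;;) {
        InputEvent ev = inbox.receive();
        if (ev.line == "quit") break;      // poison-pill shutdown
        gameState.send(std::move(ev));
    }
}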

Such an architecture gives me nice testing points: DataStores can have multiple implementations via polymorphism, and actors communicating via messages can be instantiated separately and driven through a test sequence of messages.
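A small sketch of the testing seam this enables, under the assumption of a hypothetical IDataStore interface with an in-memory implementation used as a test double:

#include <cassert>
#include <string>
#include <unordered_map>

// Abstract store: the "business logic" only sees this interface.
class IDataStore {
public:
    virtual ~IDataStore() = default;
    virtual void put(const std::string& key, int value) = 0;
    virtual int  get(const std::string& key) const = 0;
};

// In-memory implementation used as a test double.
class InMemoryDataStore : public IDataStore {
public:
    void put(const std::string& key, int value) override { data_[key] = value; }
    int  get(const std::string& key) const override { return data_.at(key); }
private:
    std::unordered_map<std::string, int> data_;
};

// Plain function over the store; this is the logic under test.
void recordKill(IDataStore& store, const std::string& player, int xpGained) {
    store.put(player, store.get(player) + xpGained);
}

int main() {
    InMemoryDataStore store;
    store.put("alice", 100);
    recordKill(store, "alice", 3);   // drive the logic with a scripted step
    assert(store.get("alice") == 103);
}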

The main point is: just because my software operates in a domain with concepts such as Customers and Orders doesn't mean there is any Customer class with methods associated with it. Quite the opposite: the Customer concept is just a bunch of data in tabular form in one or more DataStores, and “business logic” code manipulates the data directly.

     

    Follow-up read

As with many things in software engineering, critique of OOP is not a simple matter. I might have failed at clearly articulating my views and/or convincing you. If you're still interested, here are some links for you:

     

    Feedback

    I've been receiving comments and more links, so I'm putting them here:

     

    Note: This article was originally published on the author's blog, and is republished here with kind permission.





    User Feedback




Pff... Is it the OOP paradigm you're blaming? Bad architectural decisions and someone's inability to write clean code are what you should blame instead.


If one were to read the history of OOP development concepts starting with Simula-67 (the original OOP language; Norway, 1967), it would be found that the designers' original intention was to develop a way to better organize the code of an application. The result brought several other benefits such as data encapsulation, inheritance, polymorphism, and code re-use.

Problems arose when, like everything else in the Information Technology field, OOP was introduced to modern development with the release of Turbo Pascal 5.5. Subsequently, many developers began to promote the concepts of code re-use, data encapsulation, polymorphism, and inheritance without really understanding these concepts' limitations. What happened then was the extreme hyping of OOP, just like we now have with several current paradigms (i.e. Agile), which in turn produced horribly designed applications. This was the result of market reinforcement for the use of all these concepts without remembering the simplest one of all: code organization.

The first major project in New York City that incorporated OOP was a banking system, which was written up in one of the city's dailies. The developers created a nightmare scenario with their inheritance design, believing that they could simply create inheritance hierarchies with infinite levels. The reality of the matter was that inheritance should never really go beyond around three hierarchical levels, while avoiding the use of the "protected" attribute for methods.

    Most failures with inheritance then were a result of misunderstanding the intents of the concept in the first place; like trying to apply it to any type of business requirement where it really wasn't needed.

    The same holds true for all development paradigms.

As a result, I have to completely disagree with the author and his contentions about the use of OOP. And if he has never developed a large, monolithic, mainframe application using standardized, procedural techniques (prior to OOP), then he would not understand the inherent advantages of using OOP simply to better organize one's code.

No one has ever stated that to use OOP properly, one must use all of its concepts. They are there for when they make sense, but it is the inherent capability to organize one's code that makes OOP development a superior paradigm to procedural-based development endeavors.

    I believe the author should take a second look at OOP before writing such disparaging remarks about it...

     

     

     

     

     


Essentially, the classic way of programming is procedural, just like C: you can consider a file (or a cluster of files linked together) as a module. This module might have a public API that exposes global variables and functions. Other functions that the programmer wishes to hide, because they are very technical, are marked as static so they are accessible only within the file where they are declared.
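For illustration, a minimal C-style sketch of such a module (the names are purely illustrative; the code is valid as both C and C++):

/* counter.c - a C-style "module": the extern functions are the public API,
 * while helpers and state marked static stay private to this file.
 */

static int counter = 0;                  /* hidden module state */

static int clamp(int value, int max)     /* hidden helper, invisible outside this file */
{
    return value > max ? max : value;
}

void counter_increment(void)             /* public API, declared in a header elsewhere */
{
    counter = clamp(counter + 1, 100);
}

int counter_get(void)                    /* public API */
{
    return counter;
}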

On the other hand, in OOP, imagine having such a module as described above, now calling it a class, and having the ability to do all sorts of things with it. This is the so-called flexibility, or portability. Instead of having your module stuck in place like a tool shelf in your warehouse, you would have it like a toolbox that you can move around and take to different places with you.

     


However, OOP really only gains its meaning when it is applied in a specific context, when used within a design pattern. If your software entities form some sort of high-level structure, you want to organize them in better terms and have the ability to control them dynamically.


OOP and myself, yeah. It's a like-and-dislike kind of scenario.
In 1998 (the saddest year I've had so far in my life) I started out programming in Borland Delphi and was thrown into OOP from the very beginning. I had no one to teach me the fundamentals, and the internet was still too expensive. The Delphi help file was the only thing I learned from in the early days. But for some reason I understood it from the very beginning: classes, inheritance, interfaces, static vs. non-static, polymorphism, etc. So I liked it from the very beginning. For decades I coded in Delphi, also mixing in other languages like C++/Java, etc.

But since I started doing and seeing more and more professional work in the non-game development field, I have started to see the problems of over-using OOP.
There are so many people/experts out there who abuse OOP to write the worst kind of software you can imagine -> barely working, exceptions everywhere, slow as hell, untestable, impossible to understand or to follow:
- Classes which are not classes
- Abstractions just for the sake of it
- Extensibility without a reason
- Hiding everything just for the sake of it
- Using delegates/callbacks everywhere
- Using virtual functions for no reason
- Overuse of inheritance
- Misuse of polymorphism

If they wrote it with less OOP-ness, the software would still be garbage - but I could at least understand it.
Unfortunately you will find this kind of shit all over the place - especially in expensive business software or in the Java world.
But the main problem is that those "experts" teach other people. This results in more people writing poor code, which makes me very sad :-( Another problem I often see is that third-party libraries or frameworks may force you to write bad OOP code, due to their bad API design.

I am always surprised at how customers happily use such software in production environments. It's like a miracle that those things work.

But what makes me so angry is that you can actually write good software when you use the proper tools at the right time, but people somehow have forgotten that or simply don't care.

So the conclusion for me is:

OOP is totally fine when used well and not over-used.
If you can easily follow the control flow of any kind of source, the chances are much higher that it's well written - regardless of its coding style.

    On 12/10/2018 at 12:07 PM, Aceticon said:

Maybe the single biggest, nastiest problem I used to find (possibly the most frequent also) was when the work of 3 or 4 "coding artists" over a couple of years had piled up into a bug-ridden, unmaintainable mess of mismatched coding techniques and software design approaches.

It didn't really matter if one, two or even all of those developers were truly gifted - once the next one started doing their "art" their own way on top of a different style (and then the next and the next and the next), the thing quickly became unreadable due to different naming conventions, full of weird bugs due to mismatched assumptions (say, things like one person returning NULL arrays to mean NOT_FOUND but another doing it by returning zero-size arrays), and vastly harder to grasp and maintain due to the huge number of traps when trying to make use of code from different authors with different rules.

     

    The "new" fashion in non-games architecture is "micro-services". In this approach you abstract everything. Even the compiler and the operating system. You get complete freedom of choice over you "art" style.

    The assumption is: You should never re-use code across teams. A certain programmer/team can re-use their own code. However when something goes wrong and someone has to fix it: You just throw everything away, and let the new programmer start from scratch.

    You do this by making sure that every little piece of code is completely encapsulated in it's own server. (it even get's compiled separately.)

    This has performance costs (because the APIs are usually needlessly network based).

    It has boilerplate development costs (because the APIs are usually needlessly network based).

    However... The joy of being able to fix a problem by ripping out someone else's code, and then using your favourite framework to solve the problem, is really enticing.

    After having worked in this style for the past several years, I don't know if I like it or not. However it is a very interesting philosophy when you work on a very large project. Also, I think that the recent improvement in Docker containers makes it very manageable if you do it right. That said, the performance costs probably make it unsustainable for game dev.

     

    1 hour ago, SillyCow said:

    The "new" fashion in non-games architecture is "micro-services". In this approach you abstract everything. Even the compiler and the operating system. You get complete freedom of choice over you "art" style.

    The assumption is: You should never re-use code across teams. A certain programmer/team can re-use their own code. However when something goes wrong and someone has to fix it: You just throw everything away, and let the new programmer start from scratch.

    You do this by making sure that every little piece of code is completely encapsulated in it's own server. (it even get's compiled separately.)

    This has performance costs (because the APIs are usually needlessly network based).

    It has boilerplate development costs (because the APIs are usually needlessly network based).

    However... The joy of being able to fix a problem by ripping out someone else's code, and then using your favourite framework to solve the problem, is really enticing.

    After having worked in this style for the past several years, I don't know if I like it or not. However it is a very interesting philosophy when you work on a very large project. Also, I think that the recent improvement in Docker containers makes it very manageable if you do it right. That said, the performance costs probably make it unsustainable for game dev.

     

Whilst I have not worked in this style, I have designed system architectures which made heavy use of segregating things into separate services (mostly to facilitate redundancy and scalability), and in my experience there is a significant cost associated with defining proper communication interfaces between services (aka APIs) and - maybe more importantly - changing them when changes in requirements result in changes to multiple "services".

In fact, part of the secret of designing high-performance distributed systems was to find a good balance between decoupling and performance (both program performance and software development process performance) and to always be aware of fake decoupling (i.e. when things look decoupled, but they only work as long as certain assumptions - such as, say, no more than X elements being sent - are the same inside the code on all sides).

The whole thing as you describe it sounds like OO encapsulation wrapped in a heavy layer that adds quite a lot of performance overhead and a whole new class of problems around things such as failure of request execution and API version mismatch (or even worse problems, if people decide to use networking between "services"), all the while seemingly not delivering anything of value (catering to programmer fashionism and prima-donna behaviour is not value, IMHO).

Both in the literature and in my experience, the best level at which to have service APIs is as self-contained, consistent business operations (i.e. ops which must be wholly executed or not executed at all), and I can only imagine how "interesting" things start getting, with such high levels of service granularity as you seem to describe, when dealing with things such as database transactions.

     

    Edited by Aceticon

    4 hours ago, Finalspace said:

OOP and myself, yeah. It's a like-and-dislike kind of scenario.
In 1998 I started out programming in Borland Delphi and was thrown into OOP from the very beginning.

It was my personal experience, whilst going through a similar learning process myself in similar conditions (at about the same time, though luckily I jumped into Java and discovered the Design Patterns book early), that there is a stage when one has learned some Software Design and starts overengineering everything, resulting in such a heavy mass of things that "seem like a good idea" and "just in case it's needed" that it effectively defeats the purpose of the whole OO philosophy.

Eventually one starts doing things the KISS way, refactoring code when the conditions that defined a design decision change, and designing software driven by "what does this choice deliver and what does it cost, now and later", thus producing much more maintainable deliverable functionality (what code delivers matters vastly more than The Code), and doing it faster.

Looking back, I would say this transition only properly happened to me about 10 years after I started working as a Software Developer.

I reckon this is the point when one transitions from Junior Software Designer to Experienced Software Designer. It takes time to get there and I suspect not that many make this transition.

There is a similar thing around Software Development Processes, which can be observed in, for example, how many groups use things like Agile in a recipe-like fashion (often ditching the most important non-programming bits) rather than selecting elements of it based on the tradeoffs of what they deliver to the process versus what they cost (not just money, but also things like time, bug rates, flexibility, learning rates, etc.) in the context of a specific environment (i.e. the subset of Agile for use in a big non-IT corporation is not at all the same as that for use in an indie game house).

PS. I think the bit you mentioned about the ignorant spreading their ignorance (and damn, I look at online tutorials and that shit is all over) is basically Dunning-Kruger in action, just like we see all over society and the Internet at the moment: people who have learned just enough to think they know a lot, but not yet enough to understand just how much, much more they need to learn, are still very low in terms of knowledge in an area but at the peak of their self-confidence in terms of what they think they know, so they spread their ignorance around as if it were Wisdom, and do so with all the confidence of the truly ignorant.

The Original Post in this article is a stunning example of just that, which is probably why all the Old-Seadog Programmers around here seem to have jumped on it.

    Edited by Aceticon


OOP is all about interfaces. Interfaces are all about proper order.
If you write your code in the functional paradigm, you make order, but only once... Any change to a data structure or an interface breaks everything... Big programs are very hard and expensive to code in C.


    And to give my two cents also.

If you look at the C++ standard library - and I believe no one here could say it is a bad design, or bad OOP, since it has been designed by many C++ masters - we can see:

• use of many classes
• use of many 'global' functions
• most classes use inheritance, even if one should not inherit from those classes
• use of templates almost everywhere
• use of namespaces (a few)
• if we look for the virtual keyword in the public headers, there are some, but not that many. This could be explained by the third point. About 547 virtual functions in about 60 classes. If we make a quick (and not really reliable) count of the keyword class, there are about 6000 classes. So about 1/100th of the classes are polymorphic (certainly more, due to forward declarations, templates, comments...)
• if one reads Scott Meyers' books - and I also believe he's no C++ duffer - he advises, for example, creating a class for non-copyable objects, and all classes that should not be copyable should inherit from it (therefore all base classes of polymorphic hierarchies). This for sure adds a lot of inheritance and hierarchy. This also makes hierarchies more complex.
• if you have a look at Gtkmm, which is a C++ wrapper of Gtk, a GUI toolkit, they over-use classes, inheritance, polymorphism, templates and a mix of all of that. And I don't believe Gtkmm does so just for the sake of using everything C++ offers.
• and we could go on like this for a long time

Teachers are responsible for the over-use of (bad) OOP. But can they do otherwise? When you learn OOP, you learn that a dog is an animal and a cat is an animal, and all of them should form a hierarchy. You learn that all animals can move (except some), and so you need polymorphism. And you have a few years of school learning programming, OOP, C++, C#, Java, algorithms, C, functional programming... and many other things. After these 3 or 5 years you have to tell companies you are an expert. So you need to have covered all these aspects. Also, we should not forget that schools now target the job market, where 90% of CS graduates will do 'business software' and where decision-makers only believe in what big companies offer them. So they will write classes in Java, then redo the same classes in Java with some other so-called new technology made for Java a few years later. They will do windows and buttons, they will manage databases and receive and send packets over TCP/IP with Java's means. Well, all those 90% of students should be ready for this. And all those 90% will never have to manage memory or do pointer arithmetic. All those 90% of people will have to follow what Java tells them to do, follow the design pattern that Java believes is the pattern to follow. They will not have to think about other aspects, since Java will do it for them in its black box.

Also, I believe we can do bad OOP programming just as we can do bad imperative programming. See for example the Win32 library. They don't use prefixes to name their functions, their variables and so on. This results in such an intrusive library that you cannot call your own function CreateWindow, for example. If you have a look at many other C libraries, they always use prefixes to avoid name clashes. A good alternative to the native Win32 library is the well-known Qt. It is in C++ and uses OOP (inheritance, polymorphism...). Since namespaces did not exist when Qt was created, they use prefixes (and still keep them...). But Qt obliges you to declare some weird statements in all your classes, which is also very intrusive.

What I wanted to say in the previous paragraph is that bad programming is everywhere. Nothing is perfect. Even for a simple function, it is often very difficult to make it completely robust and reliable for any use (see many Stack Overflow topics, for example). Sometimes you use something else which is poorly designed or poorly implemented because of company policies. Sometimes it is because the project is old and at the time of its creation nothing better was known or practicable. What we did in game programming 30 years ago is not the same as what was done 15 years ago, and is not the same as what we do now. And for sure it will be different in a dozen years.

And to avoid bad programming, we now have new design methods such as data-oriented programming or ECS, and design patterns. Most of them will make the programmer focus on what is important for what they are meant to do, but no more. They are not the absolute salvation, and new problems will arise as soon as we reach another area of programming (i.e. networking, parallelism, user interfaces, cloud computing, AI, cryptography...). But at least now we know that for such interactive and graphical programming we have the means to avoid doing things badly.


The way I see it, it's very easy to write horrible code in OOP. In fact, if you don't know what you're doing (which is usually the case nowadays) you will write horrible OOP code by default. So yes, its disadvantages greatly outweigh its advantages.

    27 minutes ago, sirius707 said:

The way I see it, it's very easy to write horrible code in OOP. In fact, if you don't know what you're doing (which is usually the case nowadays) you will write horrible OOP code by default. So yes, its disadvantages greatly outweigh its advantages.

It's easy to write horrible code. It's actually difficult to write horrible code that follows OOP. It's difficult to follow OOP, though; it takes a lot of reading and experience. Here's the good thing: all that work learning about OOP and how to follow its principles makes you a better developer. OOP principles don't exist for no reason; they make development easier and more efficient - not just for solo devs, but also when working on a team. I can't tell you how many times I have to clean up code from a fellow developer at work who doesn't follow OOP principles, because it's easier and faster for me to refactor the code following OOP than to try to hack into the mess and add the required change without causing bugs and/or bad performance.

    So its advantages far outweigh the disadvantages.


    Someone else here mentioned the thrill of gutting out the previous Devs code and solving the problems with your own framework, basically a rewrite.

    I haven't worked for a game development company before in a professional capacity however I have and do work as software lead for several large commercial projects.

In any large, established commercial software project, a complete rewrite is commercial and professional suicide. By rewriting, you throw away years or even decades of development, bug fixes, enhancements and tweaks, and it's simply a fallacy to think you can just come in, rewrite it from scratch and retain those many decades of development; you're basically back to square one.

    Here is a good example, one of many on Google: https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/

    It's pretty much an open source thing, "I can do much better, throw that old crap away, ooh new and shiny!"

Every now and again we get some rep dropping by, saying he can improve our bespoke ERP with several decades of development behind it by replacing it with solution X; we immediately know to show him the door without delay.

    Hope this helps someone else before they make such a fundamental mistake (note: don't confuse refactor with rewrite, they're completely different beasts and refactoring is often a great idea...)


     

    On 1/3/2019 at 10:16 PM, Brain said:

In any large, established commercial software project, a complete rewrite is commercial and professional suicide. By rewriting, you throw away years or even decades of development, bug fixes, enhancements and tweaks, and it's simply a fallacy to think you can just come in, rewrite it from scratch and retain those many decades of development; you're basically back to square one.

    Sorry Brain, but I fundamentally disagree. I have done exactly this and it's been nothing but successful both for the project and my career (I also don't work in game development, I work for a large multi-national financial corporation). It's not easy, and it's not something that should be done on a whim, but to suggest it shouldn't ever be done and will always end badly is plain stupid, and is precisely why so many users are stuck, unhappily having to use crap software in so many circumstances.

I have a hard time believing many projects that have been around for decades have had anything other than positive enhancements and have consistently avoided technical debt, given that the very concept of technical debt, and the will to tackle it, has only really come to the fore over the last decade. The fact that legacy projects have decades of cruft in them is precisely why you often replace them: high levels of technical debt creating high maintenance costs, performance bottlenecks, security issues, and so forth are typically rife in legacy software precisely because it was built before we knew how to make better software. If nothing else, I've yet to see a proprietary project over 15 years old that isn't completely and utterly awful, and I've seen many.

It's only really now, with the growth of devops and the commonplace automation of builds, testing, and application of quality gates through tools such as Sonar and Fortify, that we're really beginning to build software that we can make sure stays high quality. Sometimes square one is exactly where you want to be: on a new, fair board, when the game is snakes and ladders and the old board was rigged with snakes on every square from 2 to 100. Sometimes you just need to take a step back and ask your users what they're actually trying to do, what they actually want, rather than what they've been forced to do inefficiently using a variety of hacks and workarounds on legacy stuff that they've come to assume is the only option, because no one ever showed them they can have something better.

As an example, over the last year we had a load of clients (including tier 1 banks) using legacy versions of our software, and that of our competitors, approach us because they needed to achieve GDPR compliance. The reality is we could never have done so with the existing software: the lack of functionality around auditing, security, and tracking of data through the system meant that embedding that functionality into the existing version would've taken about 1.5x as long as just starting from scratch did. Sure, starting from scratch meant we lost some obscure functionality that no one understood, but that ended up being a good thing, because upon examination that functionality only existed and was used as a fudge to get around deficiencies in the software in the first place, and thanks to re-writing it and doing it properly we could solve their actual problem, rather than have them rely on a shitty undocumented hack.

I'm not saying rewrites are always the right thing, or that they always turn out better - god only knows I sympathise with your comment on people who go "Ooh, shiny!" - but if you have talented staff who know how to build good software, if you can produce a sensible high-level plan with staged releases that will show continuous progress and a clear statement about what you're trying to achieve by rewriting from scratch, and if as such you have management buy-in because you've managed to sell it on its tangible benefits, then why wouldn't you do it? What I am saying is that claiming rewrites are never the right thing is as utterly stupid as claiming rewrites are always the right thing.

Your first pass on a piece of software will never be your best pass; you'll always do it better the second time around. The same is true of rewriting someone else's software if you've worked on it sufficiently. If you've not got talented devs, I understand where you're coming from, but any tech lead should be able to shepherd a team through a successful rewrite of a piece of software they're responsible for, and if they can't then they've no business being a tech lead; it's part and parcel of the job to be able to find the cheapest options, both short and long term, for looking after a piece of software, and if the long-term option is to replace it, which sometimes it will be, then they should be able to do that.

The idea that those who came before are beings of legend whom no one could ever best in future is nonsense; in fact, all too often those who came before didn't even understand OOP because it was still young, and so churned out low-quality, unmaintainable dross instead - much like the author of this article, in fact.

    Edited by Xest

    9 hours ago, Xest said:

I have a hard time believing many projects that have been around for decades have had anything other than positive enhancements and have consistently avoided technical debt, given that the very concept of technical debt, and the will to tackle it, has only really come to the fore over the last decade. The fact that legacy projects have decades of cruft in them is precisely why you often replace them: high levels of technical debt creating high maintenance costs, performance bottlenecks, security issues, and so forth are typically rife in legacy software precisely because it was built before we knew how to make better software. If nothing else, I've yet to see a proprietary project over 15 years old that isn't completely and utterly awful, and I've seen many.

    I maintain one right now that is over ten years old and is still maintainable, neat and tidy.

The thing to aim for isn't to completely throw it away and rewrite from scratch, as in the example I posted, but to treat development like painting the Forth Rail Bridge. It should be rewritten a subsection at a time by careful refactoring, with each refactor considered, ensuring that each component is properly isolated and uses proper object-oriented design.

Methodologies such as Agile can and do encourage such refactoring, but only if time is put aside to do it, as the default in Agile is to only accept user stories and bug fixes, so features just pile in, repeatedly, along with their bugs.

By the time you reach the last component and have signed it off, and everything is "rewritten" (read: nicely refactored), you can start again.

There are even ways to completely change the paradigm of the program, for example switching from a really simple program where design and layout are badly merged with business logic to one that uses, say, an MVC design.

I can't confirm that games do this, as they're generally more 'disposable'. However, looking at the source code these days for commercial engines (engines having more longevity than the games created in them), such as Unreal Engine, it's plain to see that it has been refactored in this way all the way from UE3 to the current UE4, without any complete rewrite. You can even run diffs against the code and still find some remaining ancient code, the adage being "if it ain't broke, don't fix it".

     

16 hours ago, Brain said:
On 1/3/2019 at 10:16 PM, Brain said:

In any large, established commercial software project, a complete rewrite is commercial and professional suicide. By rewriting, you throw away years or even decades of development, bug fixes, enhancements and tweaks, and it's simply a fallacy to think you can just come in, rewrite it from scratch and retain those many decades of development; you're basically back to square one.

---

It's pretty much an open source thing, "I can do much better, throw that old crap away, ooh new and shiny!"

The thing to aim for isn't to completely throw it away and rewrite from scratch, as in the example I posted, but to treat development like painting the Forth Rail Bridge. It should be rewritten a subsection at a time by careful refactoring, with each refactor considered, ensuring that each component is properly isolated and uses proper object-oriented design.

I've worked as a contractor for almost two decades, in a couple of industries, with maybe 15-20 different companies, and have seen both situations where a complete rewrite was the chosen solution and others where continuous refactoring was.

I've also been brought in far too many times as the external (expensive) senior guy to fix the mess the software has turned into.

It really depends on the situation: continuous refactoring is the best option in my opinion if done from early on and with few interruptions, though it requires at least one or two people who know what they're doing rather than just the typical mid-level coders.

However, once a couple of significant requirement changes come through and are hacked into a code base, and/or a couple of people have been responsible for the software, each thinking they know best and writing code their way, mismatched with the ways already used in the code, the technical debt becomes so large that any additional requirement takes ages to implement. When that happens, the software has often reached a point where a full rewrite is a more viable solution than living with it in the meanwhile whilst trying to refactor it into something maintainable. This is even more so if the software is frequently updated with new requirements.

My gut feeling is that where the balance lies depends on whether the business environment in which that software is used generates frequent requirement changes or not - in environments where there is a near-constant stream of new requirements, it's pretty much impossible to do any refactoring of large, important blocks, since any urgent new requirements that come in are likely to impact that code and have time constraints which are incompatible with the refactoring (as you can't really refactor and write new code at the same time in the same code area).

That said, maybe half the full rewrites I have worked on or seen done turned out to be very messy affairs all around, mostly because good Business Analysts and Technical Analysts are as rare as hen's teeth, so the software that ended up being made didn't actually implement the same user requirements as the old software.

    Edited by Aceticon


I'm not sure if your approach is really that much better, especially in terms of maintainability and robustness. When you talk about data-centric programming, (pure) functional programming immediately comes to mind, most prominently represented by Haskell. Interestingly, there exists actual high-quality research on using Haskell for game programming - you might check out the following links:

    https://dl.acm.org/citation.cfm?id=871897
    https://dl.acm.org/citation.cfm?id=2643160
    https://dl.acm.org/citation.cfm?id=3110246
    https://dl.acm.org/citation.cfm?id=3122944
    https://dl.acm.org/citation.cfm?id=3122957
    https://dl.acm.org/citation.cfm?id=2976010
    A quake3 clone in Haskell: https://wiki.haskell.org/Frag

    Oh and I think you might be very interested in Ted Kaminski's blog: https://www.tedinski.com/

     

    On 12/10/2018 at 6:00 AM, Hodgman said:
    • Cross-cutting concerns - if the data was designed properly, then cross-cutting concerns aren't an issue. Also, the argument about where a function should be placed is more valid in languages like Java or C# which force everything into an object, but not in C++ where the use of free-functions is actually considered best practice (even in OO designs)

    Nitpick about C#: You are not forced to have everything in an object. You can have static classes and say "using static" if you like, so from a usage standpoint, there's hardly a difference between that and putting free functions in namespaces.


I understand both sides of the discussion. OOP is good at keeping code organized and maintainable; however, it does introduce a heap of complexity that makes it difficult for anyone but the authors to understand it well enough to make good use of it. But the same applies with or without OOP. The real problem is the text. You can't simply glance at source code and see the overall structure of anything but a "Hello world!" project. You have to examine the files in detail and memorize a whole bunch of long, complicated names, which could take you the rest of your life. The core principles of OOP would work better in a general-purpose VPL.


As a learning programmer, still in uni, I am also starting to realise how difficult OOP is to deal with. Java was taught to me as a first-year coding language and I have been playing around with it a lot. I'm grateful for how readable Java developers make their code; it makes learning from other people's code quite digestible. But the way code has to be structured into a deep hierarchy makes it really difficult to use. Last week I had an experience where, in order to implement a feature, I had to use reflection just to expose a variable in an API that was set to private simply because of the whole black-box ideology, along with a hacky subclass to adjust the behaviour of a method that was using that variable. This doesn't feel right at all, but it was the only solution. I submitted it as an issue on the API's GitHub and the author went ahead and implemented the feature, as I didn't feel like forking the project just to make one thing public.

A lot of Java blogs tell us how you should hide instance variables behind getter and setter methods for encapsulation, but it just feels so cumbersome. Having a lot of getters and setters tends to hint that the variable should really belong in another class... or not. It really depends on what the variable is for, right? If it's basically a database class then it's natural, but if it's an object that does things, it tends to feel a bit unnatural in use. Also, don't setters break the concept of immutability and give the class less control of its own behaviour? But without them some things just wouldn't work; how would you make a clock class without a setTime() somewhere? I know these arguments have been made many times before, but there should really be some concrete definition of at what point or at what level these features belong.

I had a few different classes in my game, and it got more and more difficult to add new features when I didn't know where a variable should go. I could put it in its proper, sensible class, but then I have to telescope its reference along a chain of other classes and it ends up getting very coupled with other classes. This meant that whenever I wanted to add a feature I had to refactor the whole codebase, which was getting really annoying at one point because for one new feature I kept breaking 20 other features. At some point I ended up mimicking the MVC pattern. It feels more natural and sensible having a class handle data that's serialisable (with basically public fields), and a class monitoring that data and presenting view-level representations of it. For example, if I have a grid of game entities, I can store the entities in an array in one class, make modifications and send events in an update method, and let my view-level classes determine what the player sees based on what happened. It just feels so much less messy that way than coupling the behaviour to the visual representation of the events.

To be specific, I'm using libGDX as my game development library. libGDX has a library called Scene2D, which is a 2D actor graph library. When poking around forums and Discord, I noticed people really disliked it for anything except UI building with its UI sub-library, despite its vast amount of features and its usefulness in other areas as a general 2D actor graph library. One of its biggest disadvantages is that serialisation is not naturally supported, so saving the game's state to a file becomes really confusing. However, I feel like it's more manageable when I have a backing class that stores all my entities as data structs, creating actors from the data and handling behaviour dynamically.

Please let me know if my thoughts make sense; I am certainly not the most experienced developer in the world, as I'm still learning things (I tend to be really ambitious and don't like making clones... good thing I didn't start off with an MMORPG though 😛).

    Edited by Draika the Dragon


    Coders without a computer science degree will often start with a class hierarchy without considering the problem or if they need to store any data to begin with. They hit a wall from isolating their code too much before they know what to encapsulate and for what purpose, introduce a huge generic graph system that floods the instruction cache, fragments the data cache, stalls the memory bus, interrupts the execution window, trashes the heap with tiny fixed-size allocations, leaks memory from cycles and crashes with null pointer exceptions. Then I point out that all they need to make all that is a tiny global function with a traditional algorithm and pre-existing data structures, often possible to implement using a few SIMD assembly intrinsics and multi-threading for a 200X performance boost.


    Rule of thumb: initially, write all your classes as unrelated, even if they share a common function or two. Only introduce OOP relationships where and when it's clearly necessary. For example, if 4 out of 10 functions in the two classes are the same, that warrants the creation of a common ancestor.


I think it might be helpful to distinguish between Java / C# and C++. Objects in Java / C# mean smart objects, which is an oxymoron. In Java, object-oriented programming would be better called subject-oriented programming.

I played around a bit with C++ while trying to learn DirectX 11 at the same time. It was a bit of a steep learning curve. Anyway, I abandoned that and got into C#. I soon started to fall in love with strongly typed code. It was amazing: 50% of the time, when my code compiled, it actually did what I wanted it to. My experience of C and C++ (I did some C and Pascal way back) was that getting the code to compile was only the beginning of a long, painful journey. With C# I could rip my code apart, restructure and rewire (in other words, refactor) and quickly get it to work again. I would even get the weird bug occasionally where the code was doing what I wanted it to do, but I didn't know why. It's quite fun to try to track down why your code is working when it shouldn't be.

But I was also frustrated by the lack of multiple inheritance and other shortcomings. C# was designed to be better than Java, but not so much better as to become a threat to the C++ native Windows platform. So as soon as I came across Scala, with its quasi multiple inheritance, it was good riddance to Microsoft - goodbye and thanks for all the fish.

So from what I can make out, with very limited knowledge, the problem with C++ was Bjarne's "Not one CPU cycle left behind!" (relative to C). This was a great marketing slogan but a complete disaster in high-level language design. You don't need 100% runtime efficiency; 80% is good enough and allows for huge beneficial trade-offs in compile-time safety, run-time safety, compile times, reduced compiler bugs, the quality of tooling, ease of learning, etc. And so the problem with smart objects in C++ is that they are not very smart and can't even copy themselves properly.

So I see it as a choice, or rather a balance, between smart objects and dumb data. Java's "everything is a (smart) object" is dumb. Unfortunately, Scala somewhat doubled down on this, but is now sensibly looking to backtrack. Silent boxing leads to criminal and unnecessary inefficiency. An integer is dumb data. An integer doesn't know how to produce a string or convert itself to a double. An Int doesn't even know how to add itself to another Int. It requires operations to be applied to it. It has no methods. It has no identity. So to avoid boxing we must know the narrow type at compile time. However, syntactically we can still write it as if these were methods.

    5.toString

    myInt.toString

So in Scala, there is usually a choice between trait/class-based inheritance and type classes: between a smart object that carries its methods around with it, to be dynamically dispatched at run time, and dumb data where operations must be applied at compile time. But the good thing is that you can still use type classes with smart objects, with objects that inherit from AnyRef. And the type class instances that perform the operations on the different data types can themselves inherit.

    Edited by Rich Brighton



