12/09/18 10:16 PM

    The Faster You Unlearn OOP, The Better For You And Your Software

    General and Gameplay Programming

    GameDev.net
    Quote

    Object-oriented programming is an exceptionally bad idea which could only have originated in California.
      - Edsger W. Dijkstra

     

     

Maybe it's just my experience, but Object-Oriented Programming seems like the default, most common paradigm of software engineering. It's the one typically taught to students, featured in online material, and for some reason spontaneously applied even by people who never intended to use it.

I know how seductive it is, and how great an idea it seems on the surface. It took me years to break its spell and understand clearly how horrible it is and why. Because of this perspective, I strongly believe it's important that people understand what is wrong with OOP, and what they should do instead.

Many people have discussed the problems with OOP before, and I will provide a list of my favorite articles and videos at the end of this post. Before that, I'd like to offer my own take.

     

    Data is more important than code

    At its core, all software is about manipulating data to achieve a certain goal. The goal determines how the data should be structured, and the structure of the data determines what code is necessary.

This part is very important, so I will repeat it:

    Quote

    goal -> data architecture -> code

One must never change the order here! When designing a piece of software, always start by figuring out what you want to achieve, then at least roughly think about the data architecture: the data structures and infrastructure you need to achieve it efficiently. Only then write your code to work with that architecture. If the goal changes over time, alter the architecture first, then change your code.

In my experience, the biggest problem with OOP is that it encourages ignoring the data architecture and instead applying a mindless pattern of storing everything in objects, with the promise of some vague benefits. If it looks like a candidate for a class, it goes into a class. Do I have a Customer? It goes into class Customer. Do I have a rendering context? It goes into class RenderingContext.

Instead of building a good data architecture, the developer's attention shifts toward inventing “good” classes, relations between them, taxonomies, inheritance hierarchies and so on. Not only is this effort useless; it's actually deeply harmful.

     

    Encouraging complexity

When explicitly designing a data architecture, the result is typically a minimal viable set of data structures that supports the goal of our software. When thinking in terms of abstract classes and objects, there is no upper bound to how grandiose and complex our abstractions can be. Just look at FizzBuzz Enterprise Edition – the reason such a simple problem can be implemented in so many lines of code is that in OOP there's always room for more abstractions.

    OOP apologists will respond that it's a matter of developer skill, to keep abstractions in check. Maybe. But in practice, OOP programs tend to only grow and never shrink because OOP encourages it.

     

    Graphs everywhere

    Because OOP requires scattering everything across many, many tiny encapsulated objects, the number of references to these objects explodes as well. OOP requires passing long lists of arguments everywhere or holding references to related objects directly to shortcut it.

Your class Customer has a reference to class Order and vice versa. class OrderManager holds references to all Orders, and thus indirectly to Customers. Everything tends to point at everything else, because as time passes there are more and more places in the code that need to refer to a related object.

    Quote

Instead of a well-designed data store, OOP projects tend to look like a huge spaghetti graph of objects pointing at each other and methods taking long argument lists. When you start designing Context objects just to cut down on the number of arguments passed around, you know you're writing real OOP Enterprise-level software.

     

    Cross-cutting concerns

The vast majority of essential code does not operate on just one object – it actually implements cross-cutting concerns. Example: when class Player hits() a class Monster, where exactly do we modify the data? The Monster's hp has to decrease by the Player's attackPower, and the Player's xp increases by the Monster's level if the Monster got killed. Does that happen in Player.hits(Monster m) or in Monster.isHitBy(Player p)? What if there's a class Weapon involved? Do we pass it as an argument to isHitBy, or does Player have a currentWeapon() getter?

This oversimplified example with just three interacting classes is already becoming a typical OOP nightmare. A simple data transformation becomes a bunch of awkward, intertwined methods that call each other for no reason other than the OOP dogma of encapsulation. Adding a bit of inheritance to the mix gives us a nice example of what stereotypical “Enterprise” software is about.
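For illustration, here is a minimal sketch (Rust, with hypothetical names) of the same interaction written as a single free function over the data it touches, which sidesteps the question of which class the method should belong to:

// Hypothetical plain-data types; field names mirror the example above.
struct Player { hp: i32, xp: u32, attack_power: i32 }
struct Monster { hp: i32, level: u32 }
struct Weapon { bonus_damage: i32 }

// The whole cross-cutting interaction as one function over the data it
// touches; no debate about whether it "belongs" to Player, Monster or Weapon.
fn resolve_hit(player: &mut Player, monster: &mut Monster, weapon: &Weapon) {
    monster.hp -= player.attack_power + weapon.bonus_damage;
    if monster.hp <= 0 {
        player.xp += monster.level; // award experience for the kill
    }
}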

     

    Object encapsulation is schizophrenic

    Let's look at the definition of Encapsulation:

    Quote

    Encapsulation is an object-oriented programming concept that binds together the data and functions that manipulate the data, and that keeps both safe from outside interference and misuse. Data encapsulation led to the important OOP concept of data hiding.

The sentiment is good, but in practice, encapsulation at the granularity of an object or a class often leads to code trying to separate everything from everything else (even from itself). It generates tons of boilerplate: getters, setters, multiple constructors, odd methods, all trying to protect against mistakes that are unlikely to happen, on a scale too small to matter. The metaphor I like to give is putting a padlock on your left pocket to make sure your right hand can't take anything from it.
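As a toy illustration of that boilerplate (a sketch, not code from any real project), compare a field hidden behind trivial accessors with the same data exposed as a plain record:

// The "padlock on your own pocket": accessors that guard nothing.
struct Customer {
    name: String,
}

impl Customer {
    fn new(name: String) -> Self { Customer { name } }
    fn name(&self) -> &str { &self.name }
    fn set_name(&mut self, name: String) { self.name = name; }
}

// The same data as a plain record; real constraints, if any, belong at a
// module boundary rather than on every field.
pub struct CustomerRecord {
    pub name: String,
}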

Don't get me wrong – enforcing constraints, especially on ADTs, is usually a great idea. But in OOP, with all the inter-referencing of objects, encapsulation often doesn't achieve anything useful, and it's hard to address constraints that span many classes.

In my opinion, classes and objects are just too granular; the right place to focus on isolation, APIs and so on is at “module”/“component”/“library” boundaries. And in my experience, OOP (Java/Scala) codebases are usually the ones in which no modules/libraries are employed. Developers focus on putting boundaries around each class, without much thought about which groups of classes together form a standalone, reusable, consistent logical unit.

     

    There are multiple ways to look at the same data

OOP requires an inflexible data organization: splitting the data into many logical objects, which defines the data architecture as a graph of objects with associated behavior (methods). However, it's often useful to have multiple ways of logically expressing data manipulations.

If program data is stored e.g. in a tabular, data-oriented form, it's possible to have two or more modules each operating on the same data structure, but in a different way. If the data is split into objects with methods, this is no longer possible.
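A small sketch of the idea, assuming a hypothetical tabular store of OrderRow records: two independent modules read the very same rows for entirely different purposes, something that becomes awkward once the rows are locked inside objects with methods:

// A hypothetical row in a tabular, data-oriented store.
struct OrderRow {
    customer_id: u64,
    total_cents: i64,
    shipped: bool,
}

// One module looks at the rows as revenue...
fn total_revenue(orders: &[OrderRow]) -> i64 {
    orders.iter().map(|o| o.total_cents).sum()
}

// ...another looks at the very same rows as a fulfilment backlog.
fn unshipped_count(orders: &[OrderRow]) -> usize {
    orders.iter().filter(|o| !o.shipped).count()
}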

That's also the main reason for the object-relational impedance mismatch. While a relational data architecture might not always be the best one, it is typically flexible enough to operate on the data in many different ways, using different paradigms. The rigidity of OOP data organization, however, makes it incompatible with almost any other data architecture.

     

    Bad performance

The combination of data scattered across many small objects, heavy use of indirection and pointers, and the lack of a proper data architecture in the first place leads to poor runtime performance. 'Nuff said.

     

    What to do instead?

    I don't think there's a silver bullet, so I'm going to just describe how it tends to work in my code nowadays.

As stated above, data considerations come first. I analyze what the inputs and outputs are going to be, their format and volume; how the data should be stored at runtime and how it should be persisted; what operations will have to be supported, and how fast (throughput, latencies), etc.

Typically the design is something close to a database for any data with significant volume. That is: there will be some object like a DataStore, with an API exposing all the necessary operations for querying and storing the data. The data itself will be in the form of ADT/PoD structures, and any references between data records will take the form of an ID (a number, UUID, or a deterministic hash). Under the hood, it typically closely resembles, or actually is backed by, a relational database: Vectors or HashMaps storing the bulk of the data by index or ID, plus some others acting as “indices” for fast lookup, and so on. Other data structures like LRU caches also live there.
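To make the shape concrete, here is a minimal sketch (Rust, with hypothetical names such as CustomerStore) of one such DataStore: plain records in a Vec, IDs instead of object references, and a HashMap acting as a secondary “index”:

use std::collections::HashMap;

// Plain-data record; relations are expressed as IDs, not object references.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct CustomerId(u32);

struct CustomerRecord {
    name: String,
    email: String,
}

// A relational-flavoured store: bulk data addressed by ID, plus an index.
struct CustomerStore {
    rows: Vec<CustomerRecord>,              // CustomerId(i) refers to rows[i]
    by_email: HashMap<String, CustomerId>,  // secondary "index" for fast lookup
}

impl CustomerStore {
    fn new() -> Self {
        CustomerStore { rows: Vec::new(), by_email: HashMap::new() }
    }

    fn insert(&mut self, name: String, email: String) -> CustomerId {
        let id = CustomerId(self.rows.len() as u32);
        self.by_email.insert(email.clone(), id);
        self.rows.push(CustomerRecord { name, email });
        id
    }

    fn get(&self, id: CustomerId) -> Option<&CustomerRecord> {
        self.rows.get(id.0 as usize)
    }

    fn find_by_email(&self, email: &str) -> Option<CustomerId> {
        self.by_email.get(email).copied()
    }
}

A real store would add deletion, persistence and more indices, but the overall shape stays the same.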

The bulk of the actual program logic takes references to such DataStores and performs the necessary operations on them. For concurrency and multi-threading, I typically glue the different logical components together via message passing, actor-style. Examples of actors: stdin reader, input data processor, trust manager, game state, etc. Such “actors” can be implemented as thread pools, elements of pipelines, etc. When required, they can have their own DataStore or share one with other “actors”.
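And a rough sketch of the actor-style glue, using plain channels (again, hypothetical message and actor names):

use std::sync::mpsc;
use std::thread;

// Messages exchanged between logical components ("actors").
enum GameMsg {
    PlayerInput(String),
    Quit,
}

fn main() {
    let (tx, rx) = mpsc::channel::<GameMsg>();

    // "Game state" actor: owns its own data and reacts to messages.
    let game_state = thread::spawn(move || {
        for msg in rx {
            match msg {
                GameMsg::PlayerInput(line) => println!("processing: {line}"),
                GameMsg::Quit => break,
            }
        }
    });

    // Stand-in for a "stdin reader" actor: just sends a couple of messages.
    tx.send(GameMsg::PlayerInput("attack goblin".into())).unwrap();
    tx.send(GameMsg::Quit).unwrap();

    game_state.join().unwrap();
}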

Such an architecture gives me nice testing points: DataStores can have multiple implementations via polymorphism, and actors communicating via messages can be instantiated separately and driven through a test sequence of messages.

The main point is: just because my software operates in a domain with concepts of e.g. Customers and Orders doesn't mean there is any Customer class with methods attached to it. Quite the opposite: the Customer concept is just a bunch of data in tabular form in one or more DataStores, and the “business logic” code manipulates that data directly.

     

    Follow-up read

As with many things in software engineering, critique of OOP is not a simple matter. I might have failed at clearly articulating my views and/or convincing you. If you're still interested, here are some links for you:

     

    Feedback

    I've been receiving comments and more links, so I'm putting them here:

     

    Note: This article was originally published on the author's blog, and is republished here with kind permission.





    User Feedback




It is genuinely interesting that people who don't know how to use OO for the reasons OO exists (to reduce the likelihood of bugs, to reduce the amount of information that must be communicated between developers, and to control complexity by reducing cross-dependencies so that very large projects can be done efficiently, to pick just a few examples) put up their own deeply flawed pseudo-OO strawman as an example of "OO" and then proceed to argue that their imaginary construct shows how shit OO is and why people should stop doing it.

Even funnier is that this is basically a back-to-spaghetti-code movement that reverses what happened 25 years ago, when people figured out that letting everything have access to everything and be able to change everything was spectacularly bad from the point of view of producing code that has few bugs and can be maintained and extended.

It seems to be a sadly common thing in the game development branch of IT that people who have very little knowledge of how to architect large-scale solutions and how to make effective and efficient software development processes like to, from their peak-certainty (and ignorance) spot on the Dunning-Kruger curve, opine about software architecture concerns without even a mention of things like development process efficiency in aggregate (not just coding speed, which is the least important part of it), inter- and intra-team dependencies, flexibility for maintainability and extensibility, bug reduction, and bug discovery and removal efficiency.

Maybe it's something to do with so many developers in the industry not having to maintain their own code (game shipped = crap code and design problems solved) and being on average more junior than the rest of the industry, so less likely to have seen enough projects in enough different situations to have grown beyond being just coders and into awareness of technical design and architectural concerns in the software development process?

I'm a little sad and a little angry that people who have not demonstrated much in the way of wisdom in terms of software development processes are trying to undo decades of hard-learned lessons without even understanding why those things are there; a bit like saying "I never had a car accident and don't like wearing a seatbelt, so I want to convince everybody else not to wear seatbelts".

    7 hours ago, Aceticon said:

It is genuinely interesting that people who don't know how to use OO for the reasons OO exists (to reduce the likelihood of bugs, to reduce the amount of information that must be communicated between developers, and to control complexity by reducing cross-dependencies so that very large projects can be done efficiently, to pick just a few examples) put up their own deeply flawed pseudo-OO strawman as an example of "OO" and then proceed to argue that their imaginary construct shows how shit OO is and why people should stop doing it.

    20 hours ago, GameDev.net said:

     @Hodgman recently published a piece outlining his counter-arguments to typical objections to OOP:

    Yeah, you can almost re-frame this article as a checklist of signs that you're doing OOP wrong :D 

    My quick feedback / comments on it:

    • Data is more important than code - yep, people often write stupid class structures without considering the data first. That's a flaw in their execution, not the tools they have at hand. Once the data model is well designed, OO tools can be used to ensure that the program invariants are kept in check and that the code is maintainable at scale.
    • Encouraging complexity - yep, "enterprise software" written by 100 interns is shitty. KISS is life. One of the strengths of OO if done right is managing complexity and allowing software to continue to be maintainable. The typical "enterprise software" crap is simply failing at using the theory.
    • Bad performance - As above, if you structure your data first, and then use OO to do the things it's meant to do (enforce data model invariants, decouple the large-scale architecture, etc)... then this just isn't true. If you make everything an object, just because, and write crap with no structure, then yes, you get bad performance. You often see Pitfalls of OOP cited in this area, but IMHO it's actually a great tutorial on how you should be implementing your badly written OO code :D 
    • Graphs everywhere - this has nothing to do with OO. You can have the same object relations in OO, procedural or relational data models. The actual optimal data model is probably exactly the same in all three paradigms... While we're here though, handles are the better pointers, and that applies to OO coders too.
    • Cross-cutting concerns - if the data was designed properly, then cross-cutting concerns aren't an issue. Also, the argument about where a function should be placed is more valid in languages like Java or C# which force everything into an object, but not in C++ where the use of free-functions is actually considered best practice (even in OO designs). OO is an extension of procedural programming after all, so there's no conflict with continuing to use procedures that don't belong to a single class of object.
• Object encapsulation is schizophrenic - this whole thing smacks of incorrect usage. Getters and setters are a code smell -- they exist when there's encapsulation but zero abstraction. There's no conflict in using plain-old-data structures with public, primitive-type members in an OO program -- it's actually a common solution when employing OO's DIP rule. A simple data structure can be a common interface between modules! If you're creating encapsulation at the wrong level, then just don't create encapsulation at that level... This section is honestly an argument against enterprise zombies who dogmatically apply the methods their school taught them without any original thought of their own.
• There are multiple ways to look at the same data - IMHO it's common for an underlying data model to be tabular as in the relational style, with multiple different OO 'views' of that data, exposing it to different modules, with different restrictions, for different purposes, with zero copies/overhead (see the sketch after this list). So, this section is false in my experience.
    • What to do instead? - Learn what OO is actually meant to solve / is actually good at, and use it sparingly for those purposes only :)
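For illustration, here is a rough sketch (hypothetical names, not from the comment above) of what such a zero-copy “view” over tabular data could look like: the rows stay in one shared table, and each module only receives a thin wrapper exposing what it is allowed to do:

// Underlying tabular data, shared by every module.
struct OrderRow { total_cents: i64, shipped: bool }
struct OrderTable { rows: Vec<OrderRow> }

// A read-only "view" handed to the reporting module.
struct ReportingView<'a> { table: &'a OrderTable }

impl<'a> ReportingView<'a> {
    fn revenue(&self) -> i64 {
        self.table.rows.iter().map(|r| r.total_cents).sum()
    }
}

// A different view handed to fulfilment, with mutation allowed.
struct FulfilmentView<'a> { table: &'a mut OrderTable }

impl<'a> FulfilmentView<'a> {
    fn mark_shipped(&mut self, index: usize) {
        if let Some(row) = self.table.rows.get_mut(index) {
            row.shipped = true;
        }
    }
}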

     


    It's strange to see the author saying "OOP is bad and you should unlearn it" and then in the "what should you do instead" section of the article encounter words like "object" and "polymorphism".

    Edited by eugene2k


People's brains work in different ways, even when they're solving the same problem. The most important thing is that one's code is logical, clear, consistent and well documented.

    Programming is like creating art. When you are comfortable, confident and efficient with your technique, it becomes an expression of yourself.

    Edited by Guy Fleegman

    1 hour ago, eugene2k said:

    It's strange to see the author saying "OOP is bad and you should unlearn it" and then in the "what should you do instead" section of the article encounter words like "object" and "polymorphism".

    To be fair, you can use those things and not actually be following OOP principles.  Likewise, you can still write OO code in languages (such as C) which do not offer those facilities.


    "Data is more important than code"

    Here we have the core of most of the anti-OOP nonsense that seems to be the popular thing these days.   Looks like someone watched a YouTube video about data-oriented programming and now they know the truth that everyone else is clearly missing, so they must go out and spread the good word.

    Sorry, but, that's a load of b.s.   In software there are many aspects that come together.  The programmer, the user, the code, the data, the development tools, the target hardware, etc.   None of those things are objectively the most "important" thing, and certainly not so for each and every piece of software to ever be written.  

OOP is just a tool, and shockingly, one that can be used with other tools. You can use that hammer AND a screwdriver; you don't have to pick one over the other. OOP has its strengths and benefits, which is why it has become one of the most popular programming paradigms in history. It helps programmers think about solutions to problems in natural ways. It helps them write maintainable code. And the list goes on. Now, can you abuse it and write terrible OOP code? Yeah, sure. Can you also write terrible data-oriented code? Oh... yeah.

What you need to do is stop thinking in dogmatic ways and just use the tools that best suit you and the problem you're trying to solve. There is no "right" or "wrong" way to solve a problem in software engineering. The best way is the one that works for you. Of course, that doesn't mean all solutions are equally good. But you can't figure out what's going to be that good or better solution just by making blanket statements about this thing or that being the most important. Use your brain, look at the problem, and decide on the approach that makes the most sense to you. That approach might be the best fit for you and the wrong one for someone else. There's no contradiction there.

    2 hours ago, Guy Fleegman said:

People's brains work in different ways, even when they're solving the same problem. The most important thing is that one's code is logical, clear, consistent and well documented.

    Programming is like creating art. When you are comfortable, confident and efficient with your technique, it becomes an expression of yourself.

    That's fine when you work alone.  Collaborating is a different story though.

    11 hours ago, Aceticon said:

    salient points

Amen. Didn't see one mention of the main cost of development: maintainability.

    2 hours ago, jbadams said:

    To be fair, you can use those things and not actually be following OOP principles.  Likewise, you can still write OO code in languages (such as C) which do not offer those facilities.

    You can also use those things, not follow the actual OOP principles and then complain that OOP is bad ;) Or you could use the actual OOP principles but still engineer the system in a way that doesn't actually mirror its use and then complain that OOP doesn't work. I think that's what happened to the author.

    3 hours ago, Guy Fleegman said:

People's brains work in different ways, even when they're solving the same problem. The most important thing is that one's code is logical, clear, consistent and well documented.

    Programming is like creating art. When you are comfortable, confident and efficient with your technique, it becomes an expression of yourself.

I've worked about 14 years as a freelance (contract) software developer in a couple of industries, and far too often I was called in to fix code bases which had grown to be nearly unmaintainable.

Maybe the single biggest, nastiest problem I used to find (possibly also the most frequent) was when the work of 3 or 4 "coding artists" over a couple of years had piled up into a bug-ridden, unmaintainable mess of mismatched coding techniques and software design approaches.

It didn't really matter if one, two or even all of those developers were truly gifted - once the next one started doing their "art" their own way on top of a different style (and then the next and the next and the next), the thing quickly became unreadable due to different naming conventions, full of weird bugs due to mismatched assumptions (say, one person returning NULL arrays to mean NOT_FOUND while another does it by returning zero-size arrays) and vastly harder to grasp and maintain due to the huge number of traps when trying to make use of code from different authors with different rules.

We're not Artists; at best we're a mix of Craftsman and Engineer. Yes, there's room for flair, as long as one still considers the people one works with: those who will pick up our code later, or even ourselves in 6 months' or a year's time when we pick up our own code and go "Oh, shit, I forgot how I did this!".

Unsurprisingly, it has been my experience that as soon as one moves beyond little projects and into anything major, a team of average but truly cooperating developers will out-deliver a team of prima donnas every day of the week in productivity and quality.

    (And I say this as having been one such "artist" and "prima-donna" earlier in my career)


Isn't the core idea behind OOP that humans can, in general, understand and remember object-based stuff much better than anything else, symbols for instance?

I'd say it's fine unless you are stacking similar instances en masse without any logical reason; without well-crafted procedures and transformations, it's pretty obvious that at some point it will get too complex to handle.

    Edited by RandNR

Damn, good luck trying to take down OOP. There are plenty of bad examples of everything, but the tough thing to do is to take the principles as they are meant to be followed and address those directly. That way you're not just attacking the best examples, you're addressing the ideal. Addressing only the bad examples, or, if we're being generous, what you find to be the common examples, is like playing your ideal team against the bench warmers and injured players of the other team. Which is what it looks like you're doing here.

If you follow SOLID, GRASP and the very common KISS principle, then none of the problems you listed as inherent to OOP (I think you are incorrect in doing so) are problems for OOP. I recommend following principles to about 90-95%, because that's about the peak balance between development time and the benefits of the principle.


    So, as a hobbyist, with minimal formal training in C++ or object oriented programming, how relevant is this stuff to me? Feels like just a lot of discussion waaaaay above my head rn :).

    10 minutes ago, JWColeman said:

    So, as a hobbyist, with minimal formal training in C++ or object oriented programming, how relevant is this stuff to me? Feels like just a lot of discussion waaaaay above my head rn :).

    It's all relevant if you're programming. These principles aren't some way to brow beat people into line or for gate keeping (though there are people that do do things like that with them), the OOP principles really do help you to efficiently write code that is easier to maintain and plays well with others.

    No one starts off with this stuff already, it takes time and experience to get things going well. As a hobbyist, read up on it, take it in a little at a time. Every once in a while (like every year or two), review the principles again. Each time you do you will learn a little more because you'll be a little higher up the mountain.


Dangerous article. Of course everyone is free to state his/her opinions, but I feel like the cost of code maintenance is not taken into consideration at all. As far as I am concerned, encapsulation is awesome... and I'm using Verilog these days!

Proper code engineering is difficult. I've seen more than one company dominate its competitors thanks to a well-engineered codebase, and more than one company bite the dust under the weight of unmaintainable code bases.

    4 hours ago, JWColeman said:

    So, as a hobbyist, with minimal formal training in C++ or object oriented programming, how relevant is this stuff to me? Feels like just a lot of discussion waaaaay above my head rn :).

I started learning OO development ages ago, coming from a Pascal and C background and still in uni, because I felt my code was disorganized and hard to maintain and there had to be a better way to do it. I was but a hobbyist back then, doing small projects (tiny even, compared to many of the things I did later), but there was already enough frustration that I felt compelled to learn a different style of programming.

Even for a one-person hobbyist it's still worth it, because it really cuts down on the complexity of the whole codebase, allowing a single person to tackle far more complex projects, and it significantly reduces the things a coder needs to remember (or look up later if forgotten), thus reducing forgetfulness bugs as well as time wasted having to rediscover, from the code, shit one did some months before.

I strongly recommend you get the original "Design Patterns" book from the Gang of Four (not the more recent fashion-following me-too pattern books) and read the first 70 pages (you can ignore the actual patterns if you want). It is quite the intro to object-oriented design principles and, already 20 years ago, it addressed things like balancing the use of polymorphism with that of delegation.

    Edited by Aceticon
    Some missing commas were getting on my nerves.

    6 hours ago, Aph3x said:

    That's fine when you work alone.  Collaborating is a different story though.

     

    5 hours ago, Aceticon said:

Unsurprisingly, it has been my experience that as soon as one moves beyond little projects and into anything major, a team of average but truly cooperating developers will out-deliver a team of prima donnas every day of the week in productivity and quality.

    (And I say this as having been one such "artist" and "prima-donna" earlier in my career)

     

I agree with both sentiments. When working in a group, or working on a project that has to be maintainable for the unforeseeable future, the whole project should follow a consistent methodology.

Bad code is bad code though. Trying to go through poorly written OO code is just as difficult as going through poorly written code in any other methodology. It all boils down to trying to read the mind of another programmer. That's why documentation is so important, even in OO coding. I'd take a logically structured, consistent, well-documented code base that doesn't even meet my ideal conventions over a mediocre OO code base any day.

Maybe the reason why some programmers are less inclined towards OOP is that they lean towards a more free-flowing, less structured mind (while others are the reverse)? I've been sent to places to fix someone else's software too, so I do understand where the resentment and frustration comes from, but I haven't seen enough examples to say that OOP is clearly better than any other methodology. I've seen chaotic, bloated OO code and equally confusing non-OO code. Bad code is bad code... and good code is good code, no matter what methodology it employs.

    I suppose when the software industry matures to the point of, say, the housing industry, we'd have "building code" requirements. It might happen one day, but there are so many different languages and coding environments that we're still in the wild west of software development, I feel.

    Anyway, I'm not an opponent of OOP. I think it's a great methodology. It's easy to build bloat and overly complex dependencies, but when followed rigorously and thoughtfully, it's beautiful... I mean, it's well engineered and maintainable! Sorry, "beautiful" is one of those artsy-fartsy words that "you know who" tend to use. 😉

     


    I've spent the last two years programming in Assembly and C for retro computers such as the ZX Spectrum and Megadrive, and OOP just did not make sense there at all. One quickly learns - the hard way - that memory is a rare commodity and processing power is almost limited to addition and subtraction, with multiplication and division coming in at a very high premium...

Your program and its functions (or in an object's case, "methods") need to be split into data preparation and processing, in a top-to-bottom fashion. Calling even a single function can sometimes bring the program to its knees in terms of performance - local variables and passing data are not free of charge where memory is concerned. And code... boy, it takes far more code to do even the simplest of things...

Returning to OOP is like Marty McFly returning to 1985 - we have an abundance of processing power and memory, and boy are you glad to have kickass sound and graphics hardware to match! We missed you, 3D, soooo much! But because we visited the past, we now understand how sloppy we have been with OOP in the present - treating objects like primitives, and creating them on the fly during method calls that are being made inside a game loop...

I can say this with 100% confidence: if you spend time learning a structured language (C, for argument's sake) on limited hardware, alongside your OOP, then you will not go wrong with OOP.

    For giggles, I was interrogated as to why I was using C instead of Assembly for ZX Spectrum programs. It was very much like how this thread has played out! 


    This article, and the citations it presents, terrifies me. 

    I dread the potential flood of things like: "why are we using that crappy ol' OOP!?", "Duh, data oriented is SO much better and OOP is garbage, I read it on Breitbart once!", etc...

I'm not even referring to the potential commentary on this site. In my own professional world, I can see things like this coming up and having to spend energy (that I could be using for other things) explaining why articles like this need to be ignored and why it'll be a very cold day in hell before I allow a rewrite of our code, especially to placate those people who've bought into the software development equivalent of fake news.

As someone said before: OOP is a tool. Like everything else, it has a time and a place, and it needs to be wielded correctly to avoid cutting your (or someone else's) fingers off. Even principles like SOLID, which I endorse wholeheartedly, are nothing more than a set of guidelines/best practices; sometimes they can run counter to the solution of the problem you're trying to solve, but that doesn't mean you shouldn't try to follow them.

    This kind of thing is basically the equivalent of saying, "hey, my phone has thumbprints on the screen, clearly thumbs are terrible and should be removed entirely and replaced with using my big toe!"

    Personally, I'm of the mind that the only thing that's absolute, the "one true thing", is that you need to employ critical thinking skills when using your methodology/toolset/clothing style/haircut/etc... of choice. Otherwise you will end up making a straight up disaster.


I'm currently almost done reading the book Exceptional C++ by Herb Sutter. He says that people often model and implement object-oriented code badly, which may result in bad performance. OOP does not equal inheritance, by the way.

     

After briefly reading your article, it seems you are biased. You seem to be personally against OOP, but that's just your opinion.

     

    Quote

The sentiment is good, but in practice, encapsulation at the granularity of an object or a class often leads to code trying to separate everything from everything else (even from itself). It generates tons of boilerplate: getters, setters, multiple constructors, odd methods, all trying to protect against mistakes that are unlikely to happen, on a scale too small to matter.

I agree with you on that for Java and C#, as it is unfortunately something of a habit in those programming languages. However, in C++, a good programmer will ensure that data is encapsulated when necessary and only in its intended scope. The point is to keep other programmers from fiddling with data whose sole purpose is to be used in an orderly, specified manner within its scope. You must remember that any piece of code will probably be edited by another programmer. Besides, we could say that exposing all data would also generate noise for the programmers.

    Edited by thecheeselover




