What's wrong with OOP?

Recently I read many lengthy articles saying how bad OO is. Among them are Paul Graham's, and the series of articles on this site: http://www.geocities.com/tablizer/oopbad.htm (I haven't read all of them, since I don't have much time). Basically, everything about OO is bad. I fail to see it. It may not be the perfect solution, but it's not as bad as they claim. I read a more objective article, "Modular Programming Versus Object Oriented Programming (The Good, The Bad and the Ugly)", and the author made a good point that certain paradigms are more suitable for certain domains. For example, OOP is very suitable for multimedia and entertainment industries such as "Music software designers, Recording studios, game designers, even book publishers, and video production groups". He pointed out that people in these industries tend to think in terms of objects more.

What's wrong with building an abstraction by encapsulating data and behavior into a class, and providing the class as a package to be used as-is without worrying about the details? C does this as well: we specify a set of interfaces to the client in a .h file, implement them in a .c file, and if the source is proprietary, we can always ship the interface only.
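
For illustration, here's a minimal sketch of that C-style encapsulation, using a hypothetical "stack" module with an opaque pointer so clients never see the internals (all names are made up for this example):

/* stack.h -- the public interface that clients see */
typedef struct Stack Stack;          /* opaque type: the layout stays hidden */
Stack* stack_create(void);
void   stack_push(Stack* s, int value);
int    stack_pop(Stack* s);
void   stack_destroy(Stack* s);

/* stack.c -- the private implementation; only this file knows the layout */
#include <stdlib.h>
struct Stack { int data[256]; int top; };
Stack* stack_create(void)          { return (Stack*)calloc(1, sizeof(Stack)); }
void   stack_push(Stack* s, int v) { s->data[s->top++] = v; }
int    stack_pop(Stack* s)         { return s->data[--s->top]; }
void   stack_destroy(Stack* s)     { free(s); }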

What's wrong with classification? How would you program a game in a purely procedural or functional way, where it's counterintuitive to model the objects of the real world? Even Lisp supports OOP. In C, we can define low-level structs, and if we want to convert one struct into another, we have to rely on the memory layout of each struct. We would end up writing OOP features such as automatic conversion by hand anyway.
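
As a sketch of that last point, re-inventing dynamic dispatch by hand in plain C-style code tends to look roughly like this (hypothetical names; this is essentially a vtable written manually):

#include <stdio.h>

/* each "class" carries its own function pointer, since the language won't do it for us */
typedef struct Shape Shape;
struct Shape {
    void (*draw)(const Shape* self);   /* hand-rolled virtual function */
    float x, y;
};

static void draw_circle(const Shape* s) { printf("circle at %f,%f\n", s->x, s->y); }
static void draw_square(const Shape* s) { printf("square at %f,%f\n", s->x, s->y); }

int main(void)
{
    Shape shapes[2] = { { draw_circle, 0, 0 }, { draw_square, 1, 1 } };
    for (int i = 0; i < 2; ++i)
        shapes[i].draw(&shapes[i]);    /* exactly what virtual dispatch gives you for free */
    return 0;
}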

Because we are talking about a paradigm, we should not be specific to any one language, so let's not say things like "the object model in Java is dictated by Object" or "C++/Java is too verbose".

Finally, object orientation (or whatever else) is just a way to organize source code. Instead of millions of lines of code in main, we divide the program into smaller units and store them in different locations (files), and main only uses a nice interface from these modules (often just one line) to invoke certain functions when needed. The act of dividing and organizing code into logical entities (classes, functions, structs...) and physical entities (files, directories) is both a logical (science) and a creative (art) task. I don't think one paradigm is suitable for every situation.

Can anyone, especially from the anti-OO camp, explain this to me?

Because objects in OOP rarely work well for modeling objects in the real world. Real-world objects have too many classifications, or too-imperfect classifications, to model well in code. Inheritance tends never to work properly. And, simply, not everything is an object.

Modern use of OOP tends not to be pure. Standalone functions mix with objects. C# now has some functional programming bits at its core for all sorts of method implementations.
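
The C# remark refers to things like lambdas and LINQ; the same mixing shows up in modern C++, where free functions and algorithm-style code operate on objects without everything having to be a member function (a small sketch with invented names):

#include <algorithm>
#include <vector>

struct Enemy { float health; };

// a standalone function, not a member of anything
bool isDead(const Enemy& e) { return e.health <= 0.0f; }

void cullDead(std::vector<Enemy>& enemies)
{
    // objects, a free function and a functional-style algorithm, side by side
    enemies.erase(std::remove_if(enemies.begin(), enemies.end(), isDead),
                  enemies.end());
}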

But frankly, these arguments are often put forward by academics who don't give proper weight to how easy it is for less skilled programmers to think in terms of objects. Objects are particularly inelegant from a computational perspective, so they have a very bad reputation with people who focus on that as opposed to Getting Stuff Done(tm).
Among them are Paul Graham's, and the series of articles on this site: http://www.geocities.com/tablizer/oopbad.htm (I haven't read all of them, since I don't have much time).


Maybe instead of writing an article about why OOP kills baby seals, the guy should have read an article about web page design.

Anyway, common sense says that a programmer who rejects OOP on principle is just as silly as one who won't consider using anything but OOP. There are problems that quite obviously benefit from the use of OOP, while there are others that just as obviously benefit from the use of other paradigms. One doesn't have to wade through a dozen pages of diatribe to understand that.

Recently I read many lengthy articles saying how bad OO is.

I'd suggest reading articles from open-minded professionals about the pros and cons of OOP and procedural programming instead.
Nothing wrong with OOP so long as the problem you're modelling is appropriate for being modelled by OOP. Within that constraint there is only one thing about OOP that really bothers me, and even then it would be fair to say that it's more of a people/culture issue than an issue arising directly from the use of OOP: it can lead to overengineered designs, and to a tendency for the implementation of design principles and patterns to take precedence over the problem those principles and patterns are intended to solve.


OOP is the worst design methodology. Except for all the others.
Like others have said, strict OOP is probably a bad idea, since it isn't smart to artificially limit yourself to one way of thinking if you don't have to. The issues with it are well summarized above. Some of the concepts that came from OOP are extremely useful though, and that's what sometimes gets ignored when this comes up.

Normally, would you rather write an abstract (interface) class and multiple implementations, or would you rather write the functions free-standing and then create arrays of function pointers which then have to be maintained much more carefully?
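
For concreteness, the interface half of that question looks something like the sketch below (a hypothetical renderer; the alternative is a hand-maintained table of function pointers):

// the OOP version: the compiler builds and maintains the dispatch table for you
class Renderer {
public:
    virtual ~Renderer() {}
    virtual void drawSprite(int id) = 0;
    virtual void present() = 0;
};

class GLRenderer : public Renderer {
public:
    void drawSprite(int id) override { (void)id; /* issue GL draw calls */ }
    void present() override { /* swap buffers */ }
};

void renderFrame(Renderer& r)
{
    r.drawSprite(42);
    r.present();
}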

How about RAII? That requires one to at least put a foot in the door of OOP, even if that's as far as you go.
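
A minimal illustration of the RAII point, assuming a hypothetical file wrapper: the class's only job is to tie a resource's lifetime to a scope, which needs a class and a destructor but nothing else from OOP.

#include <cstdio>

// a tiny class whose whole purpose is deterministic cleanup
class File {
public:
    explicit File(const char* path) : handle(std::fopen(path, "rb")) {}
    ~File() { if (handle) std::fclose(handle); }   // runs even on early return or exception
    bool isOpen() const { return handle != nullptr; }
private:
    std::FILE* handle;
    File(const File&);              // non-copyable: an owning raw handle shouldn't be copied
    File& operator=(const File&);
};
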
OO just has a lot of pitfalls that you can fall victim to, because of the way that it makes you think about data in memory.

For example -- often, different member functions require different subsets of the member variables, and certain member variables are only used by a small number of member functions. However, if this is all bundled together into a class, then you end up with sub-optimal memory layouts, leading to bad cache usage, leading to bad performance.
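
A sketch of that situation, with hypothetical names: the per-frame update only touches a couple of members, but the class drags the rarely used ones through the cache alongside them.

#include <string>
#include <vector>

// the "natural" OO bundle: hot and cold data share one object
struct Entity {
    float x, y, vx, vy;        // touched every frame by update()
    std::string name;          // touched only by debug/UI code, yet it sits between the
    std::string description;   // hot members of neighbouring Entity instances in an array
};

// one possible fix: split the hot data out so updates stream through tightly packed memory
struct EntityHot  { float x, y, vx, vy; };
struct EntityCold { std::string name, description; };

void update(std::vector<EntityHot>& hot, float dt)
{
    for (EntityHot& e : hot) { e.x += e.vx * dt; e.y += e.vy * dt; }
}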

Another example is that packaging up all related data into an object forces a particular memory layout upon your users. Often, if someone uses multiple instances of a class, there is a more efficient memory layout possible for the "more than one object" case than OO's "design the layout for a single object" method provides.

//holds a value, which can be marked as 'invalid'
template<class T> struct Maybe
{
    T value;
    bool isValid;
};

//example usage
class Foo
{
    Maybe<int> i;
    Maybe<float> f;
};

//which results in the layout
struct Foo { int i; bool iValid; /*char pad[3];*/ float f; bool fValid; /*char pad[3];*/ };

// ideally, we would be using the following layout, but OO makes this difficult
struct Foo { int i; float f; bool iValid; bool fValid; /*char pad[2];*/ };

Normally, would you rather write an abstract (interface) class and multiple implementations, or would you rather write the functions free-standing and then create arrays of function pointers which then have to be maintained much more carefully?


Depends on the problem being solved. Sometimes (a lot of the time) the "arrays of function pointers which then have to be maintained..." consideration just does not even exist. In that case I'd much rather write a freestanding function and call it as required than have to implement all of the standard OOP baggage around it. On the other hand, where that OOP "baggage" makes my job and life easier, then bring it on! :)


The backlash against OOP is justified in some ways but misguided in other ways. Applying OOP to some problem which doesn't have a natural object structure is a mistake, so it is also a mistake to design languages that more-or-less force you to do this, such as Java. However, when it's a good fit it can be extremely elegant.

I think what happened is that when OOP was born, it grew up hand-in-hand with a large software problem for which it was a particularly good fit, and this led to a view that object-oriented design was the solution to all large software problems. The "large software problem" that I am talking about is the implementation of graphical user interfaces. OOP makes sense for this problem because it is easy to view GUI widgets as a hierarchy of objects that inherit from base classes and are composed together. You have a base widget class that supports abstract methods for the ways a user might interact with a widget; then a button is a kind of widget, a check box could be a kind of button, and so forth. Users of the widget framework can then do "Smalltalk-style" OOP in which, to make a button do something application-specific, they inherit from an abstract button class and provide a button-click handler that does the application-specific work. This kind of object-oriented design, which I'm calling Smalltalk-style, is not in current vogue, but I think it was the original idea about the way this sort of thing was to be done.
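
Roughly the shape of that design, sketched in C++ with invented class names (the original Smalltalk frameworks obviously looked different):

// base widget: abstract interactions that the framework calls
class Widget {
public:
    virtual ~Widget() {}
    virtual void draw() = 0;
    virtual void onMouseDown(int x, int y) { (void)x; (void)y; }
};

class Button : public Widget {
public:
    void draw() override { /* draw the button chrome */ }
    void onMouseDown(int x, int y) override { (void)x; (void)y; onClick(); }
protected:
    virtual void onClick() {}   // "Smalltalk-style": users subclass and override this
};

// application-specific behaviour lives in a subclass
class SaveButton : public Button {
protected:
    void onClick() override { /* save the document */ }
};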

All of that kind of stuff came out of Xerox PARC in the '70s, where I think Alan Kay was at the time, and Alan Kay is considered one of the fathers of OOP, so go figure.

off-topic: I'm looking at the links provided in the original post ... Geocities sites still exist? I thought they were all shut down years ago?

