
Member Since 26 Feb 2007

#5289359 Is inheritance evil?

Posted by Ravyne on 29 April 2016 - 07:21 PM

A good rule of thumb is to solve any problem with the least-powerful tool it can be (reasonably) solved with. Try not to think of that in a negative light -- by using the least-powerful tool, what we really mean is the one with the least unnecessary dangers attached.


There are a few general power-progressions you should try to observe:

  • Prefer composition over inheritance -- that is, use inheritance only when it supports precisely the relationship semantics you want, not for reasons of convenience.
  • Prefer interfaces (interface/implements, pure-virtual classes) over inheriting concrete classes (extends, "plain" inheritance).
  • In C++, prefer non-member, non-friend functions over member functions; member functions over non-member friend functions; and friend functions over friend classes.
  • Know the differences between private, protected, and public inheritance in C++, and use the appropriate one.
  • Keep things in the smallest reasonable scope.
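To make the first two bullets concrete, here's a minimal sketch (the class names are my own invention, not from the post) of composition standing in where convenience-inheritance might tempt you:

```cpp
#include <string>
#include <vector>

// Tempting but wrong: a Player is-not-an Inventory, it merely needs one.
//   class Player : public Inventory { ... };

// Composition: Player has-an Inventory, exposes only the operations it
// wants, and Inventory remains free to change its implementation.
class Inventory {
public:
    void add(const std::string& item) { items_.push_back(item); }
    int size() const { return static_cast<int>(items_.size()); }
private:
    std::vector<std::string> items_;
};

class Player {
public:
    void pickUp(const std::string& item) { inventory_.add(item); }
    int itemCount() const { return inventory_.size(); }
private:
    Inventory inventory_;  // composed, not inherited
};
```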


Those are just a few examples. Being a good engineer doesn't mean being the one who smugly wields tools of great power, confident you'll not fuck up. It's great when one can do that when there's no other reasonable choice -- and you'll still fuck up -- but a trait of a good engineer is seeking out the solutions that are exposed to the minimum set of potential hazards while meeting requirements of (in negotiable order) safety, performance, maintainability, usability, and ease of engineering.


Language features are not inherently evil (not even goto), but they are sometimes misapplied, and the more commonly misapplied they are, or the worse the repercussions, the worse their reputation becomes. Sometimes this is exacerbated by the way that languages are taught, as is the case with how inheritance has come to have such a poor reputation. Sometimes it's exacerbated by the mistranslation of programming skills from one language to another; in general, a Java programmer (or a C# programmer, to a somewhat lesser degree) will *way* abuse inheritance if tasked to write C++ (and they'll probably leak memory like a sieve too  :) ).


TL;DR: Know thy tools, and program.

#5289205 RPG item system: storing definitions

Posted by Ravyne on 29 April 2016 - 01:59 AM

It somewhat depends on how homogeneous the set of properties is among objects of a certain category (where category means, say, shields/chestplates/blades) -- if the properties are homogeneous, then each category maps well to a database table or, more simply, a spreadsheet page.

If individual objects are non-homogeneous but the range of options is well-defined, then something like XML can be a good fit, because it allows variance within a well-structured and verifiable format.

If individual objects are non-homogeneous and the range of options is more ad-hoc (in the sense that objects might have properties that are unique to themselves), then something like JSON or YAML can be a good fit; these formats are semi-structured -- that is, the grammar and parsing rules are well-defined, but there's no formal data schema as there is with XML. For good and for bad, there's nothing stopping you from putting any data you want anywhere, so long as your program's parsing logic can cope.
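As an illustration (the types here are invented, not from the post), the homogeneous case maps cleanly to a fixed struct per category, while the ad-hoc case maps to the kind of property bag you'd load from JSON or YAML:

```cpp
#include <map>
#include <string>

// Homogeneous category: every shield has exactly these fields,
// so one struct corresponds to one table/spreadsheet row.
struct Shield {
    std::string name;
    int defense;
    int weight;
};

// Ad-hoc properties: an item may carry fields unique to itself,
// so a semi-structured property bag fits better.
struct Item {
    std::string name;
    std::map<std::string, std::string> properties;
};
```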

#5289135 Returning by value is inevitable?

Posted by Ravyne on 28 April 2016 - 02:01 PM

Also, some compilers can have a lot of trouble in the presence of references, to the point that they fail to make seemingly simple optimizations. E.g. a math function that takes two parameters by reference can't easily tell that those parameters don't alias each other without a more complex post-inlining alias analysis pass in the optimizer, and so might generate poorer code than you'd get if the parameters were passed as value types (and then preferably in registers).
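A tiny illustration of why this matters (my own example, not from the post): when a parameter comes in by reference it may name the same object as another, and the compiler must generate re-loads to stay correct.

```cpp
// With references, 'out' and 'in' may alias; after writing through 'out'
// the compiler must re-load 'in' rather than keep it in a register.
void scaleRef(double& out, const double& in) {
    out = in * 2.0;
    out += in;  // 'in' may have just changed if it aliases 'out'
}

// By-value parameters cannot alias anything; 'in' can stay in a register.
double scaleVal(double in) {
    double out = in * 2.0;
    out += in;
    return out;
}
```

Note that the two versions even compute different results when the arguments really do alias -- scaleRef(x, x) quadruples x rather than tripling it -- which is exactly the possibility that blocks the optimization.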


Just wanted to note that this is another point in favor of that more-or-less canonical function-call signature pattern (first parameter non-const, by value, and used as the return value (hopefully in a register); second parameter by const reference) -- it's trivial for the compiler to know that the arguments don't alias. The same is true of passing both arguments by value (again, hopefully in registers), but if you can't or don't want to (maybe the object is too large, or is non-POD and requires a deep copy), the pattern I showed sidesteps the aliasing issue while mitigating at least one of the copies (if you can afford to pay it more attention, other signatures or techniques might do better, but the canonical pattern is effortless and a good default).



I also want to say quickly that the 'inline' keyword doesn't actually do what most people think it does -- it doesn't force the function to be inlined, and it doesn't even directly "suggest" that the compiler should inline it (which is what most people think it does). The 'inline' keyword only exists to tell the compiler that the function is being defined inline, and to basically not complain about finding multiple definitions, as it potentially will multiple times as a result of the definition being in a header. Having been defined inline, the function becomes more available for the compiler to perform inlining, so it's a sort-of suggestion in a heuristic sense, but the 'inline' keyword is not itself an expression of intent for something to be inlined by the compiler -- many programmers believe that's what they're saying, but that's not what the compiler understands from it. "forceinline" is closer to what people think they're saying, and depending on compiler settings even forceinline is not really forced, but just a strong suggestion.
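A minimal demonstration of what 'inline' actually buys you -- this definition can live in a header included by many translation units without violating the one-definition rule; whether it is actually inlined is entirely up to the optimizer:

```cpp
// 'inline' here says "this definition may legally appear in multiple
// translation units"; it does not force, and barely even suggests,
// actual inlining of calls.
inline int square(int x) { return x * x; }
```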

#5289129 what is meant by Gameplay?

Posted by Ravyne on 28 April 2016 - 01:31 PM

But in my defense, I wasn't trying to say it beats C++ every time -- I was saying it is "POSSIBLE" for a scripting language to outperform C++.


But it's not possible, not even once, in a fair fight -- with the deck stacked against it, with poor C++ programming, with improper or incomplete library use, sure, you can come up with micro-benchmarks that show C++ at a disadvantage -- but there are lies, damn lies, and statistics, right? Doing the same work in C or C++ will always be as fast or faster than essentially any language, "real" or script. You might, though, have to do some work that's not readily available in the language or in common libraries, and if a technique is readily available in another language that yields more performance per effort-unit, then that's a point in favor of that language -- however, that's a productivity argument, not a performance argument.


And productivity is a damn fine argument for a scripting language. A much better argument than performance, frankly -- which is the point I've been driving at the whole time.

#5288970 OOP and DOD

Posted by Ravyne on 27 April 2016 - 01:37 PM

That's somewhat untrue -- in general, the most broadly-applicable DOD-type data transformations will benefit other platforms even if they are not absolutely optimal. In part this is because details of e.g. cache-line sizes, number of cache levels, associativities of said caches, and relative latencies of each cache level through to main memory don't, in practice, have a lot of variance. Cache lines on application processors are 16 words (64 bytes) everywhere I'm aware of. L1 data cache is 16 or 32 KB everywhere, latency of about 3 cycles, usually 4-way set-associative. L2 caches are 256-512 KB per core, latency around 10-12 cycles, 4- or 8-way set-associative. L3 caches are 2-4 MB shared among 2-4 cores on simpler/slower cores (like PS4/XBONE), or 6-8 MB shared among 4 fast, wide superscalar cores (e.g. Intel i3/i5/i7), with 8-way or sometimes full associativity and about 36 cycles of latency; main memory latency is about 90 cycles if the translation is in the page tables, more if not. The prefetcher acts like an infinite L4 cache if your access patterns are well-predicted (linear forwards/backwards is best, consistent non-contiguous strides next-best), with latency not much worse than L3. Real L4 caches, where you find them, are typically a victim cache. So on and so forth.


But even if there were greater variance, the transformations you make to make good use of any kind of cache are similarly beneficial to any other kind of cache, simply because caches and memory hierarchies are universally more similar than they are different, whatever the fine details may be.


The PS3 is notable in particular for the SPUs in its Cell processor, which provided essentially all of the PS3's computational power -- these were streaming processors, like DSPs, with no real "caches" to speak of (each SPU's local store had similar access properties to a cache, but was all the memory that an SPU could see; DMA was the only way to speak to main memory, other SPUs, or the rest of the system), and as such they essentially required DOD practices to achieve reasonable computational throughput. But developers also found that these transformations benefited scalar/AltiVec code on the PPU, and in cross-platform titles even benefited Xbox 360 and PC targets. The changes that were necessary and crucial to make the PS3 work as well as it was designed to were good for other platforms as well, even when those platforms weren't strictly reliant on such transformations in the way that the PS3's SPUs were.

#5288956 what is meant by Gameplay?

Posted by Ravyne on 27 April 2016 - 12:24 PM

I don't know much about the speed of programming languages and such, but I know that the scripting language SkookumScript is on its own not faster than C++ -- but with some optimizations, certain tasks that are completed in "human time" (basically meaning completed over a couple of frames, and don't have to be refreshed every tick) can in theory perform 100 times better in SkookumScript than C++.  http://forum.skookumscript.com/t/skookumscript-performance/500


So it is possible for scripting languages to outperform real languages, but apart from certain parts of certain languages, real languages should always perform better.



From the thread you linked:

Fundamentally, well-written C++ will be of course faster than any executed scripting language, but in practice SkookumScript (which itself is written in C++) can beat naive C++ in performance due to its ability to easily time slice operations (meaning code doesn't run every frame but only every few frames).


You have to be careful how you define "outperform" -- one of the creators of SkookumScript, which I have no doubt is very performant for a scripting language, is saying right here (bold) that it's a fundamental truth that well-written C++ will beat SkookumScript (he does not even say "highly optimized"), and (italics) goes on to explain that SkookumScript can beat naive C++ (I take that to mean neither architecturally, algorithmically, nor locally optimized C++) because it has built-in time-slicing, such that it does less work. While that built-in time-slicing is a nice feature (it's an example of those kinds of programming models beneficial for scripting that I mentioned before) and it's great to have ready at hand, it's not an apples-to-apples comparison; you can do time-slicing in C++, you just have to write it (C++ coroutines, which unfortunately landed in a Technical Specification rather than C++17 proper but are already shipping as a preview in VS2015 Update 2, make this almost trivial), and it sounds like he's not even ruling out that non-time-sliced but optimized C++ could best them.
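For the sake of illustration, hand-rolled time-slicing in C++ is only a few lines even without coroutines (this sketch is mine, not from the SkookumScript thread; the names are invented):

```cpp
// Run the expensive update only every 'interval' frames -- essentially
// the trick SkookumScript bakes into the language.
class SlicedTask {
public:
    explicit SlicedTask(int interval) : interval_(interval) {}
    bool tick() {                 // call once per frame
        if (++frame_ % interval_ != 0)
            return false;         // skip the real work this frame
        ++workDone_;              // stand-in for the expensive update
        return true;
    }
    int workDone() const { return workDone_; }
private:
    int interval_;
    int frame_ = 0;
    int workDone_ = 0;
};
```

With an interval of 3, nine frames of ticking run the expensive update only three times -- the "100x" wins come from doing less work, not from faster code.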


That you can write straightforward SkookumScript that will beat naive C++ is certainly noteworthy, and a valuable feature -- but you shouldn't take from that that it "outperforms" even average C++; its creators are not boasting that claim.

#5288839 what is meant by Gameplay?

Posted by Ravyne on 26 April 2016 - 04:35 PM


thats actually what led me to asking this question on gameplay as i was wondering why I cant just do it in c++ and not a scripting language

You can, but it costs you 10 to 50 times more lines of code.


Not necessarily -- it's true that many scripting languages are compact or have features (e.g. an actor model, or a prototypal inheritance model) that lend themselves to scripting game entities and game interactions, but that does not mean that writing gameplay code in C++ has to be more difficult or more verbose for the people "scripting" the gameplay elements, albeit in C++.


If you were to use C++ for gameplay code, you might not take any special effort if you or the entire team are fluent in C++ (and the engine); if you have less-experienced people "scripting" gameplay through C++, then your goal as an engine programmer would be much like any other task -- provide an API, or indeed an embedded domain-specific language, that makes it easy for client-code "scripts" to express high-level intent while encapsulating the low-level details away. In practice, this ends up being not much different than the work of integrating a scripting language with your engine, though you might be providing more of the cogs and widgets yourself. The payoff is that, done well, your "scripting" staff gets many of the productivity, expressivity, sandboxing, and hand-holding benefits that stand-alone scripting languages are known for.
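As a toy example of the idea (an entirely hypothetical API, not from any real engine), the engine might expose an event-driven surface so that C++ "scripts" read at the level of gameplay intent:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Engine-side: a high-level surface that hides the low-level details.
class GameplayAPI {
public:
    void onEvent(const std::string& name, std::function<void()> handler) {
        handlers_.emplace_back(name, std::move(handler));
    }
    void fire(const std::string& name) {
        for (auto& h : handlers_)
            if (h.first == name) h.second();  // run every matching handler
    }
private:
    std::vector<std::pair<std::string, std::function<void()>>> handlers_;
};
```

A gameplay "script" then just registers intent -- api.onEvent("door_opened", []{ /* play creak, spawn rats */ }); -- not so different from binding a Lua callback, except everything stays in one language.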

#5288836 Is two people enough for a team?

Posted by Ravyne on 26 April 2016 - 04:25 PM


me and my friend are starting work on our game. The goal is to use this project on our resumes as well


When it comes to "experience" on a resume, it means paid professional work experience, not stuff you did on your own.  


If you have a section of your resume devoted to your hobby project where you and your friends built something, and that something did not end up being a commercial success, what you've got doesn't really count except to show interest in the field.


Now if your side project gets a million downloads and becomes a major success, then things are a little different. Then you are an entrepreneur who successfully started your own business, and the experience looks great.  Statistically that is unlikely to happen.



I don't know if it's quite so harsh as that. Hiring for a junior or entry-level position, I would assume most companies would expect to find programs from college coursework or interesting hobby projects under the headline of experience. That's certainly what I'd done with my own resume years ago. I concede that one should aim to replace these "experiences" on one's resume with professional experiences as quickly as one is able after some time in the workforce, though. For myself, having taken a non-games job fresh out of school, it was a few jobs down the line before further expansion of strictly-professional experience was more prudent than including some of my non-professional, but games-related, experience.


In general, prefer professional experiences, but bubble up interesting, relevant non-professional experiences/achievements over uninteresting or irrelevant professional experiences. Within a few years or a few jobs, you'll have enough varied professional experiences that you'll only be tempted to include non-professional experiences that are truly stand-out -- if you find that this is not the case after several years or jobs, then you've probably had a problem setting career goals and directing your career trajectory.

#5288831 what is meant by Gameplay?

Posted by Ravyne on 26 April 2016 - 04:05 PM

>> As a rule, scripting languages don't surpass "real" programming languages when doing the same work and with both languages free to elect their own optimized solutions;


theoretically, they can't - can they?


In theory, if compiled, it's possible, but only in as much as it's possible for one "real" language to be faster or slower than another "real" language. In practice, certain real languages have performance advantages over others by virtue of language design decisions, rather than, say, library implementation details.


In practice, even scripting languages that are compiled to native machine code -- and let's ignore byte-code-compiled scripting languages, because those cannot beat a highly-developed "real" language -- either have not, or cannot, implement certain kinds of deep optimization techniques: either because they are too difficult, too costly in terms of compile time, essentially impossible to achieve in a scripting solution that supports hot-loading of code, or not practical/possible to achieve across the run-time/marshalling boundary. Marshalling itself is another speed bump where the scripting language's primitive types are not the host language's primitive types, and especially where neither language's primitive types are the machine's primitive types (think CLR or JVM primitive ints, whose guaranteed behavior under shifts bigger than the machine word is one thing (IIRC), but the equivalent machine operation differs on each of ARM, x86, and x64). Still more, I'm not aware of any scripting language that exposes things like explicit memory layout of structs to the programmer, which is essential in optimization techniques such as Data-Oriented Design, but systems programming languages do.
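To show what "explicit memory layout" means in a systems language (a sketch with invented types), C++ lets you pin down and statically verify exactly where each field lands:

```cpp
#include <cstddef>
#include <cstdint>

// A layout the programmer controls and can verify at compile time --
// a guarantee scripting languages generally don't offer.
struct Vertex {
    float         x, y, z;  // 12 bytes
    std::uint32_t color;    // packs to 16 bytes total, no padding
};
static_assert(offsetof(Vertex, color) == 12, "unexpected padding");
static_assert(sizeof(Vertex) == 16, "four Vertex per 64-byte cache line");
```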


In all, practically speaking, no scripting language will ever surpass a systems programming language like C or C++ or Rust -- if one did, it would mean that it had achieved a breakthrough in compiler technology (and C and C++ compilers are already state-of-the-art, so no small feat). On the other hand, there are slower, natively-compiled languages, like, say, Swift -- which at least currently is maybe half the speed of C or C++ -- and it's possible that a highly-developed, compiled (maybe even bytecode) scripting language could best that. On the other, other hand, I'd say that Swift ought to be (and will become) faster than it is today, and even highly-developed scripting languages are likely to hit a performance ceiling before any "real" language will.

#5288827 Returning by value is inevitable?

Posted by Ravyne on 26 April 2016 - 03:36 PM

Let it also be said that in modern times, a vector class of 3 or 4 elements isn't actually a terribly useful thing -- you have access to some kind of vector instruction set that's at least 4-wide on any modern application processor, and so you should, most of the time, be using those compiler intrinsics directly along with the vector type supplied by your compiler, enabling the appropriate compiler flags and calling conventions.


If you intend to do that, and you should, then your vector classes end up being a thin wrapper over these intrinsic functions 90% of the time; a wrapper that could obfuscate optimization opportunities from the compiler if you are not careful. 


A vector class can be a useful thing if it's your mechanism for providing a non-SIMD fallback or alternative implementations for different vector ISAs -- but other approaches are also viable: conditionally-included code (#ifdefs), selecting different source files through target-specific build targets, etc. I suppose you might also elect to use a vector class if your aim is to leverage expression templates to enable vector operator overloads yet still generate code equivalent to the intrinsics, but that's fairly advanced, finicky, and can be brittle.
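By way of illustration, the "90% thin wrapper" shape might look like this over SSE (a sketch of mine assuming an x86 target; a NEON or scalar fallback would slot in behind the same names):

```cpp
#include <xmmintrin.h>  // SSE intrinsics and the compiler's __m128 type

// Thin wrapper: the compiler's vector type plus named operations.
struct Vec4 { __m128 v; };

inline Vec4 make(float x, float y, float z, float w) {
    return { _mm_set_ps(w, z, y, x) };  // note the reversed lane order
}
inline Vec4 add(Vec4 a, Vec4 b) { return { _mm_add_ps(a.v, b.v) }; }
inline float getX(Vec4 a)       { return _mm_cvtss_f32(a.v); }
```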



A matrix class is a more useful thing, since matrix operations don't have intrinsics (interestingly, the Dreamcast had a 4x4 matrix-matrix multiply instruction, though it had latency equivalent to 4 4-element vector-vector operations), and it provides a good home for bulk-wise matrix-vector (and matrix-point) transformations.

#5288817 OOP and DOD

Posted by Ravyne on 26 April 2016 - 02:54 PM

To reiterate again, OOP is not at odds with DOD. It's entirely possible, likely even, that your DOD code might employ classes, inheritance, even virtual functions in some capacity, even if not in the exact same capacity that a DOD-ignorant OOP program would. Separately, some parts of your code -- the most computationally-intensive parts, usually -- will benefit most from DOD (and often lend themselves to it fairly naturally), and other parts of your code will not benefit, and perhaps fit a DOD-ignorant OOP style more naturally.

In programming, we don't choose one approach and apply it to the entire program. It's natural and common that some parts of a program will appear more like OOP, functional, or procedural -- we as programmers are left to choose the best approach, taking into account the requirements, what language features are available to us, and how we intend to weld these parts together. DOD exists on a separate axis, and can be freely mixed into different parts of the program as needed, regardless of the programming paradigm we'll leverage. Choosing to approach some part of the program from a DOD mindset does have an impact on how you utilize those paradigms, but you get to think of it as just another requirement that has to be balanced -- it doesn't come crashing through the wall demanding that you can no longer use such-and-such language feature, or that you have to use such-and-such other feature.



By way of example, take the typical OOP approach of having a particle class -- position, mass, velocity, color, lifetime, etc -- and having a collection of those particles -- basically as Frob described earlier. If you were mistaken about DOD and assumed it merely meant "looks like procedural" you could separate the data into C-style structs, and have free functions to operate on them, but that won't be DOD because it didn't rearrange the data, it just rearranged the source code.


A more DOD approach would be to transmute the multitude of particle objects, represented by the Particle class, into a Particles class that owns all the particles -- now you have arrays (vectors, more likely, but contiguous and homogeneous in any case) of data -- positions[], masses[], velocities[], colors[], lifetimes[], etc[]. Now you've re-arranged the data, but you'll notice that this Particles thing still lends itself very well to being a class -- there's not a C-style struct in sight, and you're using std::vector, and you might inherit ExtraCoolParticles from Particles, and you might use a virtual function to dispatch the Update method (it's true that DOD prefers to avoid virtual dispatch, particularly in tight loops, but it's still sometimes the right tool at higher levels of control).
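A bare-bones sketch of that transmutation (the member names mirror the description above; the details are invented):

```cpp
#include <cstddef>
#include <vector>

// One Particles object owns parallel arrays (structure-of-arrays)
// instead of owning an array of Particle objects. Still an ordinary class.
class Particles {
public:
    void spawn(float pos, float mass, float vel) {
        positions.push_back(pos);
        masses.push_back(mass);
        velocities.push_back(vel);
    }
    // The hot loop touches only the arrays it needs: contiguous,
    // homogeneous, cache- and prefetcher-friendly.
    void update(float dt) {
        for (std::size_t i = 0; i < positions.size(); ++i)
            positions[i] += velocities[i] * dt;
    }
    std::vector<float> positions, masses, velocities;
};
```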


Moreover, you might notice that mass and velocity are almost always accessed near to one another, and the same for color and lifetime; it could be the case that a better arrangement of the data still would be positions[], masses_and_velocities[], colors_and_lifetimes[], etcs[]. Only profiling will tell you whether this is *really* the better arrangement, but it's possible. One element of DOD is separating hot data from cold (that is, frequently-accessed from infrequently-accessed), which is essentially always a win because it leverages caches and prefetching better; another element is to consider grouping disparate elements that are frequently accessed together, which is sometimes a win and sometimes not -- but neither of these says anything about what programming paradigm is employed; it's a distinct consideration.

#5288804 Returning by value is inevitable?

Posted by Ravyne on 26 April 2016 - 01:37 PM

In general, I find (and I think this is generally accepted) that this kind of function signature is best --


Vector add(Vector lhs, const Vector& rhs)
{
    return lhs += rhs;
}


Basically, you pass the first parameter as non-const by value, and then use it to return the result; the copy you'd otherwise pay for a temporary is folded into the pass-by-value copy (note that returning a parameter is not eligible for the named return-value optimization, but the return is treated as a move). Another important advantage is that, for operations with self-assigning equivalents ("+" and "+=", "-" and "-=", and so on), you can use this pattern to implement the non-self-assigning version in terms of the self-assigning version; this means that you only have to maintain one implementation of the formula, and also that only the self-assigning version needs access to the class internals -- you can (and should) implement the non-self-modifying function as a non-member, non-friend function within the same namespace as the class.


The cross-product, because it would normally reference variables from "lhs" that will have been overwritten (and also because a self-assigning version is uncommon), is a bit of a special case that doesn't lend itself ideally to this pattern. You can repeat the pattern and store some elements off to the side in locals as needed, or you can pass both parameters in by const reference, using a local non-static value to hold and return the results, as Juliean suggests. Either method will leverage RVO to eliminate extraneous copies.
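For completeness, the const-reference-plus-local shape for the cross product might look like this (a sketch; the Vector3 type here is assumed, not from the thread):

```cpp
struct Vector3 { float x, y, z; };

// Both inputs by const reference; a named local carries the result,
// which NRVO constructs directly in the caller's return slot.
Vector3 cross(const Vector3& lhs, const Vector3& rhs) {
    Vector3 result;
    result.x = lhs.y * rhs.z - lhs.z * rhs.y;
    result.y = lhs.z * rhs.x - lhs.x * rhs.z;
    result.z = lhs.x * rhs.y - lhs.y * rhs.x;
    return result;
}
```

Because the result lives in a local rather than overwriting either operand, the function stays correct even when both references name the same vector.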

#5288682 OOP and DOD

Posted by Ravyne on 25 April 2016 - 05:57 PM


In my opinion, and I'm assuming we're talking about high performance software development and C++ (since you've tagged the thread with this language), use DOD whenever possible ...


Let me expand a bit on this -- DOD is really the art of wringing utmost performance from a set of hardware that has specific, real-world characteristics -- machines have a word size, a cache-line size, multiple levels of cache all with different characteristics and sizes, and main memory, disk drives, and network interfaces, all of which have specific bandwidths and latencies measurable in real, wall-clock time. Furthermore, a machine has an MMU and DMA engines, and it has peripheral devices that require or prefer that memory objects used to communicate with them appear in a certain format (e.g. compressed textures, encoded audio). Because of the already large -- and still growing -- disparity between memory access speed and CPU instruction throughput, it has been a lesser-known truth for some time that memory-access patterns, not CPU throughput or algorithmic complexity, are the first-order consideration for writing performant programs. No fast CPU or clever algorithm can make up for poor memory access patterns on today's machines (this was not the case earlier in computing history, when memory access speeds and CPU throughput were not so mismatched; I would estimate it has been the case since around the time of the original Pentium CPU, but it hadn't become visible to more mainstream programmers until probably 10 years ago, or less).


If performance is critical, DOD is the only reasonable starting point today. Period. End of Story.


But one must have a reasonable grasp of where performance is critical -- it would be unwise to program every part of your program at every level as if DOD is necessary or desirable in the same way that writing the entirety of your program in Assembly language would be -- in theory, you might end up with the most efficient program possible, but in practice you'll have put an order of magnitude more effort into a lot of code that never needed that level of attention to do an adequate job, and you'll have obfuscated solutions to problems where other methods lend themselves naturally. For instance, UI components would gain nothing by adopting DOD, yet a DOD solution would likely give up OOP approaches that fit the problem so naturally that UI widgets are one of the canonical example-fodder used when teaching OOP.




... and OOP when forced to because (even though I'm not sure if DOD has been formally and completely defined) what comes to mind technically when thinking of it is that it help us to tackle a couple of problems with OOP:

1. Inheritance abuse (including CPU costs of virtual function calls although generally that is an optimization).

2. Cache wastage through composition abuse and inheritance.

3. Destructors, constructors, member functions, member operator overloading, etc. leading more functional code writing instead of OOP.

Technically, as has been stated before, the main result that you get from this is more POD and fewer objects, sometimes automagically achieving better memory usage. Ultimately, you want to balance these things so that your only reason to use the (few) advantages of OOP is convenience.


Yet it's important to maintain awareness that OOP and DOD are not necessarily at odds. You can't, for example, answer the question "What's DOD?" with "Not OOP." Whatever programming paradigm(s) you choose to adopt, it's prudent to select and leverage what features they can offer in service of DOD, for the parts of your program that adopt DOD. It might not be possible to write a DOD solution that looks exactly like a typical OOP solution, but it's very possible to write a DOD solution that looks *more like* a typical OOP solution than like a typical procedural solution. Again, DOD is (and must be) prime where you have deemed performance to be critical, but there are no language features or programming paradigms that it forbids; like all things in engineering, there must always be a considered balance of competing needs.

#5288668 what is meant by Gameplay?

Posted by Ravyne on 25 April 2016 - 03:58 PM

For what it's worth, when scripting languages -- even compiled ones -- make claims of being as fast or faster than C or C++ or whatever language they might typically be embedded into, those claims are usually dubious. They compare features that the scripting language has meticulously optimized against naive implementations in the language they're comparing against, or they're comparing library functions that might be used in similar situations in either language but that do wildly different amounts or kinds of work underneath. As a rule, scripting languages don't surpass "real" programming languages when doing the same work, with both languages free to elect their own optimized solutions; it's uncommon even for a scripting language to match a "real" language in performance under these conditions. That goes doubly so when the language they're comparing to is a "bare-metal" language like C, C++, Rust, or others.


I'm not correcting this misconception as an academic argument. I'm correcting it because it's common to fall for the siren song of performance when selecting a scripting language. While it may sometimes be convenient to have the flexibility of implementing a particular bit of performance-intensive code in your scripting language (either because it saves dropping into a harder-to-use language, or because it avoids crossing run-time/marshalling boundaries), it is most often a better idea to implement that functionality in the language of your engine and expose it to scripts as a service. Thus, if you overvalue this ability in a scripting solution, you might be compelled to give up ground on features that are far more important in a scripting language, such as productivity, ease of integration, how widely used it is, or whether its programming model supports the kinds of interactions you need to model in your gameplay without creating a lot of that infrastructure yourself.


TL;DR: Performance is rarely a noteworthy consideration for things you should consider scripting to begin with. If you've chosen a scripting language with performance as your primary concern, you've probably traded away more worthwhile features to get it.

#5288658 OOP and DOD

Posted by Ravyne on 25 April 2016 - 02:44 PM

As others have pointed out, what you're calling DOD is more akin to procedural-style programming, as typical of C code. You can do OOP in C even, you just don't have convenient tools built into the language for doing so. Likewise, you can do actual DOD using OOP techniques or procedural techniques, or functional or other techniques as well.


When we talk about Object-Oriented, Procedural, Functional, Declarative (and more) styles of programming, we typically call those programming paradigms -- a language that is designed to fit one (or maybe blend a few) of those paradigms typically has language-level features and makes language and library design decisions that support and encourage programmers in leveraging a certain mindset when expressing their solutions at the level of source code.


As yet, I'm not aware of any language that adopts Data-Oriented Design the way that, say, C++ adopts Object-Oriented Design, and I (like most people, I'd assume) tend to think of DOD as existing on a separate plane, mostly orthogonal to the one where OOP, procedural, and the other programming paradigms live. That's because actual DOD isn't really about how a programmer maps a solution onto source code; it's about how the solution maps onto the realities of the hardware, with an emphasis on which data belong physically together and how data flows through the program logic. DOD says that this mapping from solution to real hardware matters more than the mapping from solution to source code: in DOD, hardware realities drive the solution, and the solution drives the source code. This reverses the typical approach, in which programmers either don't deeply consider the realities of hardware at all (indeed, some schools of programming actively discourage such considerations) or try to retrofit hardware considerations as optimizations after the program structure, shaped by whichever paradigm, has already crystallized and become difficult to change fundamentally. DOD has to be considered from the start, since it dictates how your data will be organized and how it will flow, at least for the processing-intensive parts of your program that will benefit from it; DOD can't be an afterthought.
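
A standard way to make "data drives the layout" concrete is the array-of-structs versus struct-of-arrays choice: letting the hot loop's access pattern, rather than the conceptual "object", dictate how data is organized in memory. This is a minimal sketch with illustrative names, not code from the post:

```cpp
#include <vector>
#include <cstddef>

// AoS: each Particle interleaves position with fields the update loop
// never reads, so every cache line fetched is mostly wasted.
struct Particle { float x, y, z; float mass; int material_id; };

// SoA: the fields the hot loop touches are stored contiguously, so the
// cache sees a dense stream of exactly the data being processed.
struct Particles {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
};

void integrate(Particles &p, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += p.vx[i] * dt;   // contiguous reads and writes per array
        p.y[i] += p.vy[i] * dt;
        p.z[i] += p.vz[i] * dt;
    }
}
```

Notice that the source-code paradigm barely changes between the two versions; what changes is which data sit next to each other in memory, which is exactly the concern DOD puts first.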


On OOP, one of the troubles is that what's taught as "OOP" in books and in college classrooms tends to be a shallow and dogmatic view of it. Most colleges today teach OOP using Java, which as a language is particularly dogmatic (there are many reasonable choices the language simply disallows because its designers deemed their one true way automatically superior), not to mention needlessly verbose because of it. Thus Java is all the OOP many people know when they leave college, and they go on to program in C# or C++ or other "OOP" languages as if they were Java.


Java has no free-standing functions and no operator overloading, it is garbage-collected, and it insulates itself from any real hardware by targeting a fictional, homogeneous virtual machine.


Java made choices largely opposite to C++'s, even though the two look superficially similar. C++ has free-standing functions, supports operator overloading, is not garbage-collected (or even reference-counted by default), and does not hide real hardware from you, though it defines explicitly where platform differences may appear (simultaneously allowing, but discouraging, reliance on such platform-specific behavior). These are just a few examples, and both languages have their place, but it should come as no surprise that programming either one as if it were the other, where that's even possible, does the program a disservice. It's like trying to speak Spanish by mixing Spanish words with English grammar: you might communicate your ideas in the end, but you sound like an idiot and everyone wonders why you're so confident in your Spanish.


In C++, for example, it's good practice for a class to be as small as possible, containing only the member variables it needs and only the member functions that must manipulate those variables directly. What's more, in C++, free-standing functions in the same namespace as a class, if they operate on that class, are every bit as much a part of that class's interface as member functions are, because of how C++ name lookup and overload resolution work (see: Koenig lookup, also known as argument-dependent lookup). In Java-style OOP this cannot be, because the language says every function must be part of a class; as a result, every function can manipulate member variables directly even when it doesn't need to (Java's approach is worse for encapsulation, and it makes testing more difficult in much the way global state does). This one difference makes good, idiomatic program design fundamentally different between the two languages, which is a long way of saying that even within "OOP", there are different, competing flavors that dominate in one language or another.

Finally, while I have no love for Java, I don't mean to leave you with the impression that C++-style OOP is the best style of OOP. C++ happens to be a particularly popular and mostly-good blending of OOP with control over low-level hardware concerns, which, combined with its mostly-C-compatible roots, has made it very attractive for game development and other computationally intensive domains where efficient hardware utilization pays dividends. C++ is not even a "pure" form of OOP, and many computer scientists argue that languages like Simula (the first OOP language) and Smalltalk (another very early OOP language, influenced by Simula) have never been surpassed as examples of the OOP programming paradigm.



In the end, the best programs tend to balance pragmatism with just enough looking forward. Programs that see the light of day tend to do only what they need, without caring overmuch how pretty or fast or ideologically pure they are. At the same time, they avoid painting themselves into a corner: too much specialization too soon, in the wrong places, or without good reason often ends up as wasted effort when it proves inflexible in the face of necessary changes later on. There isn't a formula for this balance; it's something you gain a feel for through experience and, to a lesser extent, by learning from others who have it. It's the art of knowing when "better" has become "good enough", and accepting that past that point "better still" is rarely a justification unto itself. It's accepting, even embracing, that we will never know more about a problem now than we will in the future, and therefore not making big bets on unknowns, for or against (as a side note, this is not at odds with DOD, since hardware details are known and immutable).