Just because you're using components in some places doesn't mean you have to (or should) use them everywhere. Most of the time when people say they're using a component-based architecture, they just mean that their game-objects (Entities/Pawns/whatever-you-want-to-call-them) are implemented through components -- they don't usually (ever?) mean that all their underlying subsystems use or are components too (though, usually, there is a game-object component that corresponds to most of the subsystems). For example, Unity3D uses a component-based architecture in this sense -- GameObjects are bags of components, but there's an underlying engine that isn't.
Ravyne
- Member Since 26 Feb 2007
- Offline, Last Active Today, 11:09 AM
- Group GDNet+
- Active Posts 4,266
- Profile Views 16,303
- Submitted Links 0
- Member Title Member
- Age 31 years old
- Birthday June 10, 1983
Aside from game development: Computer Languages, Old school gaming (Particularly RPGs), Embedded Systems, Electronics.
Outstanding Forum Member
Posted by Ravyne on Yesterday, 12:48 PM
Using a pointer to the data to be held in each list node made all the difference. Now it's as slow as the C90 version.
That falls under the "cache-thrashing to cache-friendly" clause then. The extra indirection has to reach out to different memory, and that puts more pressure on the cache and might be preventing the prefetcher from doing its thing too.
This is a good example of why C90 doesn't get you any free mojo -- the way to do a generic linked list in C is with void pointers, and that's likely what it's doing here. In C++, through templates, you can specialize the behavior of a data structure based on properties of its type parameter, while still presenting the same interface to the client code. Here, the STL version was able to either store the value itself inside the node (if the value type is small) or use contiguous blocks of memory. The STL uses "tricks" like this everywhere to provide better general-case performance when the type in question is amenable to it. std::shared_ptr does the same thing -- small data values are stored inside the shared_ptr control block, and this saves a pointer indirection and a cache penalty like the one you're seeing in std::list; but if the data value is large, it uses a control block that holds a pointer to an external data value stored somewhere else. This optimization is enabled when you use std::make_shared, and uses type-erasure to be entirely transparent to the programmer.
Behold the power of templates. To achieve similar performance in C, you'd have to write a separate list-node for each small data type you wanted to use, plus the whole set of linked-list functions to operate on each.
Posted by Ravyne on 29 January 2015 - 05:53 PM
On second reading, I may have the STL implementation swapped in my head with your custom one. Regardless, the general points still stand. Your custom implementation is slower/faster than the STL implementation because your benchmark is broken, you're using something suboptimally, your implementation is buggy, or your implementation does more/less than the STL implementation.
And again, in any case, if your MSVC++ 2012 implementation (by which I take you to mean C++, as implemented by Microsoft circa 2012) is slow -- and you have the right optimization settings enabled -- you aren't going to gain any voodoo by sticking to the arcana of C90.
Languages for the most part aren't slower or faster (all the popular ones, with modern compilers/JITs/VMs, fall in roughly the same order of magnitude) -- only programs are slower or faster. When you see performance differences larger than an order of magnitude, it's almost certain* that you're either measuring it wrong, doing something dumb, or trying to compare apples to oranges.
* For completeness, going from a cache-thrashing solution to a cache-friendly one can give very large speedups, but the two are such different beasts that I would generally consider that to fall under the apples-to-oranges clause.
Posted by Ravyne on 29 January 2015 - 05:18 PM
C90 isn't really going to get you anything for free, neither is a custom implementation in any language.
Whatever you're testing against most likely appears slower for one of a few reasons:
- You're testing the standard implementation incorrectly (e.g. You very likely have checked iterators and debug mode enabled in your test. Disable them)
- You're using the standard implementation incorrectly (e.g. You're not warming it up, but you are warming up yours -- such as by preallocating storage)
- Your custom solution is implemented incorrectly (bugs can sometimes be fast)
- Your custom solution is less robust (doing less is always faster)
You can in general write a custom data structure that's faster than the standard ones, but only by sacrificing features, correctness, or robustness. Lots of very smart people write those library implementations; it's unlikely you're outperforming them without cutting corners somewhere. Sometimes you know which corners you can cut, and that's fine if you can accept the brittleness that introduces into your code (e.g. you might make a design change that invalidates an assumption your "faster" implementation relies upon).
For myself, I always try to use the standard containers for as long as I can, and as well as I can, unless I have a very good reason otherwise (e.g. I can't use the STL on the platform) -- the correctness and robustness they provide while your code and program structure are still in flux is invaluable. I consider replacing them only as an optimization, when profiling shows that they are falling short and can carry me no further -- almost always by the time this is even possibly the case, the program structure has crystallized and I have a good idea what my needs are, what won't change, and therefore what corners I can cut safely without screwing myself later on.
Remember: Premature optimization is the root of all evil.
Posted by Ravyne on 29 January 2015 - 05:04 PM
What you're doing with swap/pop_back is the general pattern; however, it isn't stable (which means the order of the objects in the vector is changed relative to one another, which sometimes might cause you trouble). If you don't need the vector to be stable, this is fine.
If you need stability, then you either have to remove the element and allow the vector to compact itself -- or -- you can use the mark-and-sweep pattern, where you mark the element as inactive but leave it in place, and then you sweep through at a more convenient time such as between frame processing. During the sweep you can use std::remove_if with a predicate that looks at the inactive flag. During processing you'll need to check the state of the element unless you know you haven't made anything inactive since the last sweep.
Posted by Ravyne on 29 January 2015 - 12:13 PM
Even if you're going to stick with a language like C++, it's immensely beneficial to learn many languages of different styles, with their own paradigms and idioms.
Just to be clear, because taken out of context my previous posts could read as my being an OOP-with-C++ cheerleader, I'm quoting this for emphasis. In my own programs I freely mix OOP, procedural, functional, and other influences. It's important for people to remember that C++ is *not* an OOP language in the same sense that the first several versions of C# were, and Java almost entirely is even today. C++ has support for multiple paradigms, and that mixing of techniques and design influences is visible in the C++ standard libraries even from the early days. I have difficulty thinking of even one programming language whose practice is not made better by adopting techniques outside its primary paradigm(s). Sometimes a language that does one paradigm incredibly well is a very powerful tool to have, but it just isn't suitable to every programming problem. As such, my yardstick for programming languages tends to measure how easily the language lets me solve problems in the way I see fit without its agenda getting in my way, and thus my preference for languages that give me more discretion as a programmer (note that a language that puts some principled restriction/burden on the programmer is acceptable if the benefit is worthwhile. I think Rust is an interesting language, though it remains to be seen how practical it is for the kind of low-level programming found in games and such. I've just begun experimenting with it myself.)
Posted by Ravyne on 28 January 2015 - 06:44 PM
Now that might be just me but it sounds awfully like a "dey took our jibs!" reasoning there. Must be those filthy Java programmers, et cetera. Also you seem to assume colleges teach "idiomatic Java", I'm going to tell you they don't. They teach generic OO concepts, often badly, no matter the language they end up actually using.
I probably should have been more careful in how I chose my words. Java's never taken my jerb, but I do think it's a poor language, and yes, I do have a bit of a soapbox about it. When I say "idiomatic Java" or "idiomatic C++", I don't necessarily mean the things that really are best practices or what's taught in schools; I mean the things that the culture around that language broadly accepts as sacrosanct, to be revered and striven towards at all levels, not just by experts. In Java, the enterprise is largely what drives the culture -- Java, the language itself, was literally designed to make Java programmers interchangeable, because that's attractive to big business. That's the reason Java gives so much less discretion to the programmer than C++ does, and disallows such anti-social behaviors as operator overloading and the heresy of free-standing functions. Business would prefer if all their programmers wrote the same kind of code, and Java does what it can to enforce that at a language level. Because programmers who have been weaned on Java have never been afforded such freedoms, it remains difficult for them to utilize new-found freedoms in a language that affords them the privilege, and so the opportunity is ignored or distrusted by these programmers even when it is objectively the preferable solution.
Some schools, many no doubt, probably don't teach strictly from best-practices. Certainly I was exposed to ideas in my C++ courses that are not good practices for software at scale, usually as a contrived means of demonstrating one language concept within a bubble -- I recall a particular assignment that required us to use exceptions as a flow-control mechanism. But the choice of language does have an impact. In Java, you cannot teach a student that free-standing functions are preferable because the language doesn't truly allow them -- sure, you have static methods, but that starts to fall apart when the static member is equally related to two or more types: where do you put it then? In Java (and in C#) you see people creating new classes whose only purpose is to be a bucket for static methods. How silly is it that one has to pay penance to the type system to achieve such a simple request as a free-standing function? How tragic is it that for most college graduates, this is the solution that becomes their muscle-memory and is something they must unlearn to be an effective C++ programmer. That design decision has nothing to do with good OOP practice, and everything to do with the particular orthodox worldview that Java's proprietors set forth in their market analysis. To be fair, Java serves that market expertly -- it faces essentially no competition from C++ at all in that space, it competes mostly with C# there.
I think the thrust of what I wanted to get across was that to become a truly enlightened OOP practitioner, you have to be willing to be a heretic if your background is Java. It requires you to actively reject certain notions that are enshrined in the Java language. For its part, C++ is not perfect, and I'm not saying that teaching C++ in colleges is the cure; I believe C++ to be better in many ways (owing mostly to its lack of far-reaching orthodoxy and its disregard of marketing strategy), but it also presents its own challenges, owing mostly to its C legacy and its incrementally accrued complexity. It's really not that C++ is the ideal expression of OOP -- it's not -- but it's the least-restrictive option we have among the three languages (C++, Java, C#) that are widely used in industry and a broadly-marketable skill. Certainly I am also biased, as C++ is the only one among them that lets me express in greatest detail how I want the machine to carry out its work (down to the level of how and when to allocate and reclaim memory, or how to organize and pack data efficiently for machine access) while also giving me the expressiveness to solve problems at a high level in the manner I choose.
Personally, I use exceptions in c++ only for throwing errors from failed constructors. As ctors cannot return a value for obvious reasons this is the common sense way to raise an error. Anywhere else return codes are simpler and easier to work with...
Exceptions, at least as popularly used, are not a laudable feature of C++ or any other language. They're particularly troublesome in C++, though, because it lacks a garbage collector (technically, the C++ standard allows for one, but no one offers one that I'm aware of). That's why C++ programmers have to deal with this whole business of functions being exception-unsafe, or being exception-safe with the weak exception guarantee, or being exception-safe with the strong exception guarantee. Return codes are viable when the issue can be dealt with immediately in the calling scope, but they start to break down otherwise -- you end up seeing solutions where the error code is simply stored in some global and a panic() function is called when the error needs to escape more than a couple of stack-frames. And every level the "exception" must escape has to have its intent polluted by error-handling code. That's actually the reason for exceptions to exist in the first place -- to allow intermittent errors to be handled without manually accounting for them in the flow-control of your program. Were it not for the difficulties that stem from combining manual resource management with exceptions in C++, exceptions would be a fairly good solution for this problem (they work much better in garbage-collected languages like Java or C#), but they still always work best when used sparingly.
Also, if we are limiting ourselves to return-codes, how then do functions return other values? Any function that can fail must then use an out-parameter to return its result, which is not convenient and leads to code that can be difficult to follow. In other languages, functions can return multiple values, which alleviates some of this problem, but that's something C++ lacks a good solution for -- it doesn't have multiple return values, and it lacks a "value bag" type (such as a tuple) that is both computationally and semantically light-weight (std::tuple is nice for what it is, but isn't so light that you'd want to reach for it as the solution to all your essential error-handling). std::optional<T> will be sufficiently lightweight, I think (when it's officially adopted), but its capacity to communicate an error is limited to expressing only whether there was an error or not; it can't on its own express what the nature of the error was.
Posted by Ravyne on 28 January 2015 - 12:47 PM
Yep, Gimp isn't lacking in features so much as it's lacking in UI polish -- and even then, not so much because the interface is terrible, but because its interface is not the Photoshop interface that most users of such software know. It also lacks somewhat in optimization, and I don't think it's able to utilize GPGPU acceleration as much as Photoshop (maybe not at all?).
Paint.net might also be a viable no-cost option, depending on what you need. And Mischief has been getting a fair bit of attention lately and is only $25 -- it's not by any means comparable to Photoshop for all uses, but a number of art people I know have expressed their preference for Mischief over Photoshop just for general sketching and concept work (as opposed to production art work).
Posted by Ravyne on 27 January 2015 - 06:41 PM
Part of the bad rap that OOP has can be tracked down to the prevalence of Java in Western college curricula. Java, as a language, adheres to a particularly conservative orthodoxy of what "OOP" is -- everything is an object or part of one, the programmer can't be trusted to override operators in a sensible way, everything is garbage collected, don't think about the cost of abstractions, use lots of exceptions -- in other words, many of the things people call "typical C++ bullshit" or "typical OOP bullshit" is actually "typical Java bullshit" that's been dragged into C++ by recent grads or Java refugees who don't know how to write idiomatic C++, and so write idiomatic Java in C++ instead.
Java's idioms are fine for Java, such as it is, but idiomatic Java is bad C++. In idiomatic C++, we prefer non-member functions over member functions (Java calls them Methods); in idiomatic C++, we trust the programmer to override operators and other such anti-social things; in idiomatic C++ we require that a programmer thinks about resource lifetimes and encourage them to think about the cost of their abstractions; in idiomatic C++, exceptions are to be used sparingly in truly exceptional (usually non-resumable) circumstances.
Now, C++ is by no means perfect, as an OOP language or otherwise. It's got warts, it drags a lot of legacy along, it's terribly complex, and its far corners are dimly lit for all but a select few programmers (usually the types who attend C++ standardization meetings, implement compilers, or are in the business of selling library implementations). Additionally, exceptions and manual resource management mostly go together like oil and water -- you can't just put the two in the same container and expect it to work out; they only combine well when you take great care in the combining.
Another part of the OOP-is-bad legacy, as it relates to C++, is that C++ was the first really mainstream language to popularize OOP, and it did it in large part by dragging procedural (that is, C) programmers into OOP through C++. The existing OOP programmers at the time, the folks using Smalltalk or Simula, were not overly impressed with C++ because it lacked a lot of the OOP richness they were used to in the languages they already had -- those languages weren't commercially successful on the scale that C++ was, but even so there was no mass exodus to the newer C++. Bad OOP/C++ practices that stem from this bloodline (as opposed to the Java bloodline) tend to manifest as C-with-classes-style programs, which look and act like a procedural program design that's been shoehorned into objects, usually with little regard at all for proper design.
But finally, be careful not to think of C++ as an "OOP language" because its not. C++ is a multi-paradigm language that supports and combines procedural, OOP, functional, and macro concepts into one language. You can use C++ in any of these ways, or all of them, the choice is yours.
As for Data-Oriented Design, I submit to you that it is not in fact in opposition to OOP. Instead, I think of them as orthogonal and complementary. Objects remain an important and solid way of organizing DoD code and representing its structures. You only have to change your perspective about what constitutes an object in your design language. DoD is effective and important for maximizing performance on today's machines -- when you need to unlock the full potential of performance, DoD is a necessity in any programming language or paradigm (in my view, DoD is more of an applicative pattern than a paradigm, because it's a way of expressing how our program solutions map to hardware, not a way of expressing how our thoughts map to our program solutions).
Posted by Ravyne on 27 January 2015 - 06:07 PM
I agree with the general sentiments expressed -- certain games only work in certain orientations, and for those it's fine to just assume that orientation. For fixed-orientation portrait games this means you can probably ignore orientation altogether, but for fixed-orientation landscape games you probably want to support both clockwise and anti-clockwise 90-degree rotations, as the controls on some mobile devices might otherwise interfere with where the player wants to rest their hands. Luckily you can do this with a little considerate coding, and you don't need different art/layout assets to achieve it.
But if a game works equally well in either orientation, it's best to let the player use their preference and to move fluidly between orientations (but give the player a way to lock it to their preference -- sometimes acceleration of the person (as in a car or on a bus) can cause the device to think it's in an orientation that it's not). This takes effort in both code and art/layout -- I think the cost is worth it in the long run, but I also don't think it's a necessary feature for v1.0 (just don't paint yourself into a corner by hard-coding the orientation all over the place). I also don't think people are too upset by games that simply choose an orientation arbitrarily (as long as it works), but all things being equal I would choose landscape myself, just because many of the peripheral control devices only work with (or strongly prefer) horizontal layout -- but that's only a consideration if your game uses arcade-style controls.
Posted by Ravyne on 26 January 2015 - 04:51 PM
Amen to that. While I wouldn't say the TO NEEDS a particularly powerful CPU with his modest use case, mobile CPUs are at least 50% slower than mainstream desktop CPUs, at least in multithreaded workloads. Intel Mobile CPUs are still dualcores at max, which makes them comparable to the i3 desktop CPUs... with frequency differences and different Cache sizes, a mobile i7 might be a little bit faster than the fastest desktop i3 (IDK, I didn't really compare them that much).... but it will be at least 50% slower than a desktop (mainstream) i7.
Not true. I've got a quad-core with hyperthreading in my laptop, and it's the same 4th-generation i7 architecture as in my desktop. It doesn't spin up quite as fast in turbo, has a slower base clock rate, and it spends more of its time at slower speeds to keep power consumption and heat generation down, but it's basically identical otherwise (I think the desktop version has twice the cache, too). A laptop CPU isn't going to beat the best desktop CPU in sustained performance, no, but it's a closer fight than you might expect. My laptop keeps pretty close pace with the top-end quad-core i5 in multithreaded workloads, as the hyperthreading in my laptop makes up for the clockspeed advantage on the desktop, and the cache numbers are pretty comparable. It's maybe 20%-25% slower than my 4770k desktop CPU, which is nearly as good as you can get without going to Socket 2011.
And my laptop CPU is two notches down from the best you can buy. If I were willing to drop another $500 or so I could have had 400Mhz better base/turbo, and doubled my cache. That would very nearly close the gap with my Desktop CPU.
Posted by Ravyne on 26 January 2015 - 04:04 PM
So, it's hard to say without seeing things in motion or playtesting, but based on your screenshot, it might be the case that it's not apparent when the player is going to fall and take damage. In your shot I see what looks like a very slightly raised platform (the white tiles), and grass, but along the right and top edges I see no visual indication that there's a height change from tile to tile. If greater disparities in height go similarly without visual indication, then I wouldn't expect to take damage either, and I'd be upset if I did.
On second glance, those blue pillars are floating, aren't they? And you mean that damage is taken when you fall from a blue pillar to the ground, which is either grass or white tile here -- is that right? I'd say that here you have unclear visual communication about the height differences, owing to the use of a mixed perspective. The ground appears to be in traditional 3/4ths overhead perspective, but the floating pillars appear to be in something closer to a traditional platforming side-view (as indicated by the shadows being in a straight row). And also, if your player character has just fallen from the blue pillar above, why has he (she?) not landed next to its shadow?
I would guess that players are responding to the lack of visual cues and mixed messages regarding viewing angles.
Posted by Ravyne on 26 January 2015 - 03:26 PM
It doesn't seem problematic in concept; however, it is out of step with 2D tradition -- I can't think of a 2D game with fall-damage. Traditionally you have only taken damage by falling onto/into an environmental hazard, such as spikes or a pit, or you simply die when falling from a very great height. It sounds like people are put off by this being something unexpected, or perhaps the mechanic or its specific implementation doesn't fit. Is your game in other ways a traditional 2D game (platformer?), but takes this one fairly unexpected departure?
Posted by Ravyne on 22 January 2015 - 02:37 PM
Actually i have gone beyond the business plan stage...
I mean with your idea you haven't got the funds to fully develop and mass-market it. So with the aim of developing further in versions/sequels, you release a watered-down version (so the idea is out), which means the rich guys (big companies) with the huge marketing resources see the idea and your toil becomes their gain
So don't make a watered-down version of your game if you're that worried about it. I think this is actually not as much a threat as you make it out to be, but in any case trying to make a big splash against the AAAs by playing their own game is very much a case of your reach exceeding your grasp -- unless you can get someone to give you enough money to play that game effectively.
But you're falling into a false dichotomy, because you don't have to go big-budget or go low-budget. You could make a different game, and thereby build experience and perhaps earn a nestegg with which to take on a bigger idea.
These days cool ideas comes about ( in addition to intelligence and creativity) by: accidentally, experimentally or both, otherwise it will be obvious and would have been done.
If you don't agree you are saying the rest of the world is not as smart (or maybe dumb) - in other words you are saying "i can easily see what other cannot see"
Think of all the thoughts you've had and never executed on -- and not just unexecuted because it was a bad idea, but unexecuted because life stands in your way, or because watching your favorite television show is just too important to you to give up in pursuit of an idea. You have more ideas, I'm sure, than you would ever get to in 10 lifetimes. Everyone is like this. Follow-through is far more important than an idea, and it's far rarer. Many good ideas will have been thought of already (at least in broad strokes), and some ideas do have a kind of latent value, but no idea has extant value until it's made real and shared.
Posted by Ravyne on 22 January 2015 - 02:17 PM
Can you switch from Express to VS2013 Community? With the Express SKUs, a decision was made not to support graphics diagnostics for developers of desktop apps, but VS2013 Community supports all forms of development and all the tools -- Community is essentially Pro with some license restrictions to prevent larger commercial entities and enterprises from using it. You should check the terms for yourself, but in essence Community is free for evaluation, open-source contribution, and for teams of up to 5, and yes, you can use it commercially if you fit under those restrictions. You *can't* use it if you're a development shop larger than 5 people, or are part of an "enterprise" as Microsoft defines it (I forget exactly how it's defined, but it's either making millions of dollars per year or having a certain large number (hundreds?) of employees).
Community, as a pro-derived SKU, also supports the rest of the advanced features that weren't necessarily in Express, and also supports plugins, which the express SKUs don't. If you're a Unity3D user, this means you can use Visual Studio Tools for Unity (formerly UnityVS) entirely free now.