
Ravyne

Member Since 26 Feb 2007
Offline Last Active Yesterday, 06:28 PM

#5288831 what is meant by Gameplay?

Posted by Ravyne on 26 April 2016 - 04:05 PM

>> As a rule, scripting languages don't surpass "real" programming languages when doing the same work and with both languages free to elect their own optimized solutions;

 

theoretically, they can't - can they?

 

In theory, if compiled, it's possible, but only insofar as it's possible for one "real" language to be faster or slower than another "real" language. In practice, certain real languages have performance advantages over others by virtue of language design decisions, rather than, say, library implementation details.

 

In practice, even scripting languages that are compiled to native machine code -- and let's ignore byte-code-compiled scripting languages, because those cannot beat a highly-developed "real" language -- either have not, or cannot, implement certain kinds of deep optimization techniques: either because they are too difficult, too costly in terms of compile time, essentially impossible to achieve in a scripting solution that supports hot-loading of code, or impractical to achieve across the run-time/marshalling boundary. Marshalling itself is another speed bump wherever the scripting language's primitive types are not the host language's primitive types, and especially where neither language's primitive types are the machine's primitive types (think CLR or JVM primitive ints, whose guaranteed behavior under shifts bigger than the machine word is one thing (IIRC), while the equivalent machine operation differs on each of ARM, x86, and x64). Still more, I'm not aware of any scripting language that exposes things like explicit memory layout of structs to the programmer, which is essential in optimization techniques such as Data-Oriented Design, but systems programming languages do.

 

In all, practically speaking, no scripting language will ever surpass a systems-programming language like C or C++ or Rust -- if one did, it would mean its authors had achieved a breakthrough in compiler technology (and C and C++ compilers are already state-of-the-art, so no small feat). On the other hand, there are slower, natively-compiled languages -- say, Swift, which at least currently is maybe half the speed of C or C++ -- and it's possible that a highly-developed, compiled (maybe even byte-code-compiled) scripting language could best that. On the other, other hand, I'd say that Swift ought to be (and will become) faster than it is today, and even highly-developed scripting languages are likely to hit a performance ceiling before any "real" language will.




#5288827 Returning by value is inevitable?

Posted by Ravyne on 26 April 2016 - 03:36 PM

Let it also be said that in modern times, a vector class of 3 or 4 elements isn't actually a terribly useful thing -- you have access to some kind of vector instruction set that's at least 4-wide on any modern application processor, so most of the time you should be using those compiler intrinsics directly, along with the vector type supplied by your compiler, and enabling the appropriate compiler flags and calling conventions.

 

If you intend to do that, and you should, then your vector classes end up being a thin wrapper over these intrinsic functions 90% of the time -- a wrapper that can obfuscate optimization opportunities from the compiler if you are not careful.

 

A vector class can be a useful thing if it's your mechanism for providing a non-SIMD fallback or alternative implementations for different vector ISAs -- but other approaches are also viable: conditionally-included code (#ifdefs), selecting different source files through target-specific build targets, etc. I suppose you might also elect to use a vector class if your aim is to leverage expression templates to enable vector operator overloads yet still generate code equivalent to the intrinsics, but that's fairly advanced, finicky, and can be brittle.
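To make the fallback idea concrete, here is a minimal sketch of conditionally-included code, assuming a GCC/Clang-style toolchain that defines __SSE__; the function name add4 is illustrative, not from any particular library:

```cpp
#include <cstddef>
#if defined(__SSE__)
  #include <xmmintrin.h>
#endif

// Hypothetical 4-wide add: uses SSE intrinsics when the compiler
// reports they're available, otherwise falls back to a scalar loop.
void add4(const float* a, const float* b, float* out) {
#if defined(__SSE__)
    __m128 va = _mm_loadu_ps(a);            // unaligned 4-float load
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb)); // vertical add, store result
#else
    for (std::size_t i = 0; i < 4; ++i)     // non-SIMD fallback
        out[i] = a[i] + b[i];
#endif
}
```

Either branch computes the same result; the #ifdef merely selects which instructions get emitted for the target.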

 

 

A matrix class is a more useful thing, since matrix operations don't have intrinsics (interestingly, the Dreamcast had a 4x4 matrix-matrix multiply instruction, though it had latency equivalent to four 4-element vector-vector operations), and it provides a good home for bulk-wise matrix-vector (and matrix-point) transformations.




#5288817 OOP and DOD

Posted by Ravyne on 26 April 2016 - 02:54 PM

To reiterate again, OOP is not at odds with DOD. It's entirely possible, likely even, that your DOD code might employ classes, inheritance, even virtual functions in some capacity, even if not in the exact same capacity that a DOD-ignorant OOP program would. Separately, some parts of your code -- the most computationally-intensive parts, usually -- will benefit most from DOD (and often lend themselves to it fairly naturally), and other parts of your code will not benefit and perhaps fit a DOD-ignorant, OOP style more naturally. In programming, we don't choose one approach and apply it to the entire program. It's natural and common that some parts of a program will appear more like OOP, functional, or procedural -- we as programmers are left to choose the best approach, taking into account the requirements, what language features are available to us, and how we intend to weld these parts together. DOD exists on a separate axis, and can be freely mixed into different parts of the program as needed, regardless of the programming paradigm we leverage. Choosing to approach some part of the program from a DOD mindset does have an impact on how you utilize those paradigms, but you tend to think of it as just another requirement that has to be balanced -- it doesn't come crashing through the wall demanding that you can no longer use such-and-such language feature, or that you must use such-and-such other feature.

 

 

By way of example, take the typical OOP approach of having a particle class -- position, mass, velocity, color, lifetime, etc. -- and a collection of those particles, basically as Frob described earlier. If you were mistaken about DOD and assumed it merely meant "looks like procedural", you could separate the data into C-style structs and have free functions operate on them, but that wouldn't be DOD, because it didn't rearrange the data; it just rearranged the source code.

 

A more DOD approach would be to transmute the multitude of particle objects, represented by the Particle class, into a Particles class that owns all the particles -- now you have arrays (vectors, more likely, but contiguous and homogeneous in any case) of data -- positions[], masses[], velocities[], colors[], lifetimes[], etc.[]. Now you've rearranged the data, but you'll notice that this Particles thing still lends itself very well to being a class -- there's not a C-style struct in sight, and you're using std::vector; you might inherit ExtraCoolParticles from Particles, and you might use a virtual function to dispatch the Update method (it's true that DOD prefers to avoid virtual dispatch, particularly in tight loops, but it's still sometimes the right tool at higher levels of control).
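A rough sketch of what that structure-of-arrays Particles class might look like -- the names (Vec3, spawn, update) are illustrative, not from any particular engine:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// One contiguous array per field, instead of an array of Particle objects.
class Particles {
public:
    void spawn(Vec3 pos, float mass) {
        positions.push_back(pos);
        masses.push_back(mass);
        velocities.push_back({0.0f, 0.0f, 0.0f});
    }

    // Virtual so a hypothetical ExtraCoolParticles could override it;
    // the hot inner loop itself involves no virtual dispatch.
    virtual void update(float dt) {
        for (std::size_t i = 0; i < positions.size(); ++i) {
            positions[i].x += velocities[i].x * dt;
            positions[i].y += velocities[i].y * dt;
            positions[i].z += velocities[i].z * dt;
        }
    }

    std::size_t count() const { return positions.size(); }

    virtual ~Particles() = default;

protected:
    std::vector<Vec3>  positions;
    std::vector<float> masses;      // unused in this sketch; shown for shape
    std::vector<Vec3>  velocities;
};
```

Note that this is still perfectly ordinary OOP on the outside; only the data layout changed.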

 

Moreover, you might notice that mass and velocity are almost always accessed near to one another, and the same for color and lifetime; it could be the case that a better arrangement of the data still would be positions[], masses_and_velocities[], colors_and_lifetimes[], etcs[]. Only profiling will tell you whether this is *really* the better arrangement, but it's possible. One element of DOD is separating hot data from cold (that is, frequently-accessed from infrequently-accessed), which is essentially always a win because it leverages caches and pre-fetching better; another element is to consider grouping disparate elements that are frequently accessed together, which is sometimes a win and sometimes not -- but neither of these says anything about which programming paradigm is employed; it's a distinct consideration.




#5288804 Returning by value is inevitable?

Posted by Ravyne on 26 April 2016 - 01:37 PM

In general, I find (and I think this is generally accepted) that this kind of function signature is best --

 

Vector add(Vector lhs, const Vector& rhs)
{
    return lhs += rhs;
}

 

Basically, you pass the first parameter by value as non-const, and then use it to return the result. The return-value optimization nicely removes any overhead. Another important advantage is that, for operations with self-assigning equivalents ("+" and "+=", "-" and "-=", and so on), you can use this pattern to implement the non-self-assigning version in terms of the self-assigning version; this means you only have to maintain one implementation of the formula, and also that only the self-assigning version needs access to the class internals -- you can (and should) implement the non-self-modifying function as a non-member, non-friend function within the same namespace as the class.
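Spelled out as operator overloads, the pattern described above looks something like this minimal sketch (the three-float Vector is illustrative):

```cpp
struct Vector {
    float x = 0, y = 0, z = 0;

    // The one real implementation of the formula lives here.
    Vector& operator+=(const Vector& rhs) {
        x += rhs.x; y += rhs.y; z += rhs.z;
        return *this;
    }
};

// Non-member, non-friend: needs no access to Vector's internals.
// The left operand is taken by value and reused as the result.
Vector operator+(Vector lhs, const Vector& rhs) {
    lhs += rhs;
    return lhs;  // move/NRVO removes the extra copy
}
```

Only operator+= would ever need to change if the formula did, and operator+ stays outside the class.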

 

The cross product, because it would normally reference variables from "lhs" that will have been overwritten (and also because a self-assigning version is uncommon), is a bit of a special case that doesn't lend itself to this pattern ideally. You can repeat this pattern and store some elements off to the side in locals as needed, or you can pass both parameters in by const reference, using a local non-static value to hold and return the results, as Juliean suggests. Either method will leverage RVO to eliminate extraneous copies.
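The const-reference variant for the cross product might be sketched like this, assuming Vector is a simple three-float struct (the definition here is for illustration):

```cpp
struct Vector { float x, y, z; };

// Both operands by const reference; the result is a local temporary,
// so no element of either input is read after being overwritten,
// and RVO constructs the return value in place.
Vector cross(const Vector& a, const Vector& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
```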




#5288682 OOP and DOD

Posted by Ravyne on 25 April 2016 - 05:57 PM

 

In my opinion, and I'm assuming we're talking about high performance software development and C++ (since you've tagged the thread with this language), use DOD whenever possible ...

 

Let me expand a bit on this -- DOD is really the art of wringing the utmost performance from a set of hardware that has specific, real-world characteristics -- machines have a word size, a cache-line size, and multiple levels of cache, all with different characteristics and sizes; they have main memory, disk drives, and network interfaces, all of which have specific bandwidths and latencies measurable in real, wall-clock time. Furthermore, a machine has an MMU and DMA engines, and it has peripheral devices that require or prefer that memory objects used to communicate with them appear in a certain format (e.g. compressed textures, encoded audio). Because of the already large -- and still growing -- disparity between memory access speed and CPU instruction throughput, it has been a lesser-known truth for some time that memory-access patterns, not CPU throughput or algorithmic complexity, are the first-order consideration for writing performant programs. No fast CPU or clever algorithm can make up for poor memory access patterns on today's machines (this was not the case earlier in computing history, when memory access speeds and CPU throughput were not so mismatched; I would estimate it has been the case since around the time of the original Pentium CPU, but hadn't become visible to more mainstream programmers until probably 10 years ago, or less).

 

If performance is critical, DOD is the only reasonable starting point today. Period. End of Story.

 

But one must have a reasonable grasp of where performance is critical -- it would be unwise to program every part of your program, at every level, as if DOD were necessary or desirable, in the same way that writing the entirety of your program in assembly language would be -- in theory, you might end up with the most efficient program possible, but in practice you'll have put an order of magnitude more effort into a lot of code that never needed that level of attention to do an adequate job, and you'll have obfuscated solutions to problems where other methods lend themselves naturally. For instance, UI components would gain nothing by adopting DOD, yet a DOD solution would likely give up OOP approaches that fit the problem so naturally that UI widgets are one of the canonical examples used when teaching OOP.

 

 

 

... and OOP when forced to, because (even though I'm not sure DOD has been formally and completely defined) what comes to mind technically when thinking of it is that it helps us tackle a couple of problems with OOP:

1. Inheritance abuse (including the CPU cost of virtual function calls, although generally that is an optimization).

2. Cache wastage through composition abuse and inheritance.

3. Destructors, constructors, member functions, member operator overloading, etc. leading to more functional code writing instead of OOP.

Technically, as has been stated before, the main result you get from this is more POD and fewer objects, sometimes automagically achieving better memory usage. Ultimately, you want to balance these things so that your only reason to use the (few) advantages of OOP is convenience.

 

Yet, it's important to maintain awareness that OOP and DOD are not necessarily at odds. You can't, for example, answer the question "what's DOD?" with "Not OOP." Whatever programming paradigm(s) you choose to adopt, it's prudent to select and leverage what features they can offer in service of DOD, for the parts of your program that adopt DOD. It might not be possible to write a DOD solution that looks exactly like a typical OOP solution, but it's very possible to write a DOD solution that looks *more like* a typical OOP solution than like a typical procedural solution. Again, DOD is (and must be) prime where you have deemed performance to be critical, but there are no language features or programming paradigms that it forbids; like all things in engineering, there must always be a considered balance of competing needs.




#5288668 what is meant by Gameplay?

Posted by Ravyne on 25 April 2016 - 03:58 PM

For what it's worth, when scripting languages -- even compiled ones -- make claims of being as fast as or faster than C or C++ or whatever language they might typically be embedded into, those claims are usually dubious. They compare features that the scripting language has meticulously optimized against naive implementations in the language they're comparing to, or they compare library functions that might be used in similar situations in either language but that do wildly different amounts or kinds of work underneath. As a rule, scripting languages don't surpass "real" programming languages when doing the same work, with both languages free to elect their own optimized solutions; it's uncommon even for a scripting language to match a "real" language in performance under these conditions. That goes doubly so when the language they're comparing to is a "bare-metal" language like C, C++, Rust, or others.

 

I'm not correcting this misconception as an academic argument. I'm correcting it because it's common to fall for the siren song of performance when selecting a scripting language. While it may sometimes be convenient to have the flexibility of implementing a particular bit of performance-intensive code in your scripting language (either because it saves dropping into a harder-to-use language, or because it avoids crossing run-time/marshalling boundaries), it is most often a better idea to implement that functionality in the language of your engine and expose it to scripts as a service. If you overvalue this ability in a scripting solution, you might be compelled to give up ground on features that are far more important in a scripting language, such as productivity, ease of integration, how widely used it is, or whether its programming model supports the kinds of interactions you need to model in your gameplay without creating a lot of that infrastructure yourself.

 

TL;DR: Performance is rarely a noteworthy consideration for the things you should consider scripting to begin with. If you've chosen a scripting language with performance as your primary concern, you've probably traded away more worthwhile features to get it.




#5288658 OOP and DOD

Posted by Ravyne on 25 April 2016 - 02:44 PM

As others have pointed out, what you're calling DOD is more akin to procedural-style programming, as typical of C code. You can do OOP even in C; you just don't have convenient tools built into the language for doing so. Likewise, you can do actual DOD using OOP techniques or procedural techniques, or functional or other techniques as well.

 

When we talk about Object-Oriented, Procedural, Functional, Declarative (and more) styles of programming, we typically call those programming paradigms -- a language that is designed to fit one (or maybe blend a few) of those paradigms typically has language-level features and makes language and library design decisions that support and encourage programmers in leveraging a certain mindset when expressing their solutions at the level of source code.

 

As of yet, I'm not aware of any language that adopts Data-Oriented Design in the way that, say, C++ adopts Object-Oriented Design, and I (and most people, I would assume) tend to think of DOD as existing on a separate plane that's mostly orthogonal to the plane where OOP, procedural, and other programming paradigms exist. This is because actual DOD isn't really about how a programmer maps their solutions to source code; it's really about how their solution maps to the realities of hardware, with an emphasis on which data belong physically together and how they flow through the program logic. DOD says that this mapping from solution to real hardware is more important than the mapping from a programmer's solution to source code -- thus, in DOD, hardware realities drive the solution, and the solution drives the source code. This is the reverse of the typical approach, where programmers do not typically deeply consider the realities of hardware (indeed, some schools of programming actively discourage such considerations) or, if they do, attempt to retrofit hardware considerations as optimizations after the program structure, according to whichever programming paradigm, has already crystallized and become difficult to fundamentally change. DOD has to be considered from the start, since it dictates how your data will be organized and how it will flow, at least for the processing-intensive parts of your program that will benefit from it; DOD can't be an afterthought.

 

On OOP, one of the troubles is that what's taught as "OOP" in books and in college classrooms tends to be a very shallow and dogmatic view of it. Most colleges today teach OOP using Java, which as a language is particularly dogmatic (there are many reasonable choices which the language simply disallows a programmer to make, because the language designers deemed their one true way automatically superior), and is needlessly verbose because of it. Thus, Java is all the OOP many people know when they exit college, and they go on to program in C# or C++ or other "OOP" languages as if they were Java.

 

Java has no free-standing functions, Java has no operator overloading, Java is garbage-collected, Java is intrinsically independent of any real hardware by creating a fictional homogeneous virtual hardware platform.

 

Java made choices largely opposite those of C++, even though the two look superficially similar. C++ has free-standing functions, supports operator overloading, is not garbage-collected (or even reference-counted by default), and does not make itself independent of real hardware, but defines explicitly where those differences may appear (simultaneously discouraging, but allowing, reliance on such platform-specific behavior). These are just a few examples, and both languages have their place, but it should come as no surprise that programming either one as if it's the other, where that's even possible, does a disservice to the program. It's like trying to speak Spanish by mixing Spanish words with English rules for grammar -- you might be able to communicate your ideas in the end, but you sound like an idiot and everyone wonders why you seem so overconfident of your ability to speak Spanish.

 

In C++, for example, it's good practice for a class to be as small as possible, containing only the member variables necessary and only the member functions that must be able to manipulate those member variables directly. What's more, in C++, free-standing functions inside the same namespace as a class, if they operate on that class, are every bit as much a part of that class's interface as member functions are, because of how C++ name lookup and overload resolution work (see: Koenig lookup). In Java-style OOP, this cannot be, because the language says that every function must be part of a class -- and as a result every function can manipulate member variables directly even if it doesn't need to (Java's approach is worse for encapsulation, and makes testing more difficult in much the same way that global state does). This one difference makes good, idiomatic program design fundamentally different between the two languages -- all of which is a long way of saying that even within "OOP" there are different, competing flavors that dominate in one language or another.

Finally, while I have no love for Java, I do not mean to leave you with the impression that C++-style OOP is the best style of OOP -- C++ happens to be a particularly popular and mostly-good blending of OOP with control over low-level hardware concerns which, combined with its mostly-C-compatible roots, has made it very attractive for game development and other computationally-intensive domains where efficient hardware utilization pays dividends. C++ is not even a "pure" form of OOP, and many computer scientists argue that languages like Simula (the first OOP language) and Smalltalk (another very early OOP language, influenced by Simula) have never been surpassed as examples of the OOP paradigm.
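The point about free functions and Koenig lookup (argument-dependent lookup) can be sketched like this; the namespace and type names are made up for illustration:

```cpp
#include <cmath>

namespace geo {
    struct Vec { float x, y; };

    // Not a member, not a friend -- it uses only Vec's public data.
    // Living in Vec's namespace makes it part of Vec's interface:
    // unqualified calls find it through the argument's namespace.
    float length(const Vec& v) {
        return std::sqrt(v.x * v.x + v.y * v.y);
    }
}
```

A caller can write `length(v)` with no `geo::` qualification; ADL finds the function because `v` is a `geo::Vec`.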

 

 

In the end, the best programs tend to balance pragmatism with just enough looking forward. Programs that see the light of day tend to do only what they need, without caring overmuch about how pretty or fast or ideologically pure they are. At the same time, they avoid painting themselves into a corner -- too much specialization too soon, in the wrong places, or without good reason often ends up as wasted effort when it proves inflexible in the face of necessary changes later on. There isn't a formula for this balance; it's something you gain a feel for through experience, and to a lesser extent by learning from others who are experienced. It's the art of knowing when "better" has become "good enough", and accepting that after this point, "better still" is rarely a justification unto itself. It's accepting, and even embracing, that we will never know more about a problem now than we will in the future, and not making big bets on unknowns, for or against (as a side note, this is not at odds with DOD, since hardware details are known and immutable).




#5288253 MSVC 2015 constexpr and static assertion

Posted by Ravyne on 23 April 2016 - 12:45 AM

It's not just constexpr limitations on the function that cause problems here. static_assert expects its first parameter to be a boolean constant expression (including any expression that can be statically known at compile time, not just constexpr) -- in the case that function f would be evaluated at runtime, what is the compiler to do with that static_assert? If f is not evaluated at compile time, then x was not known at compile time, yet the code still tries to feed it to static_assert, which needs a constant. The two cannot co-exist in this way.

 

What you probably want is for the static_assert to be dropped out of runtime-evaluated constexpr functions, but that's not how it works.
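A minimal sketch of the situation (the function f here is illustrative, not the original poster's code):

```cpp
// A constexpr function may also be called at runtime, so its parameter
// is not itself a constant expression inside the body.
constexpr int f(int x) {
    // static_assert(x > 0, "positive");  // ill-formed: x isn't a constant expression
    return x * 2;
}

// static_assert is fine at namespace scope on a genuinely constant call:
static_assert(f(21) == 42, "evaluated at compile time");
```

The commented-out line is exactly the shape of assertion the compiler must reject, because nothing guarantees x is known when f runs.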




#5288230 Optimizing Generation

Posted by Ravyne on 22 April 2016 - 07:31 PM

What I would actually recommend is combining those first two 3-axis loops. There's a few things you're doing here that seem strange.

 

Firstly, in your first loop (x, y, z) you test for null objects before assigning each index a new BlockEmpty() -- but since you just allocated 'blocks', they should all be null. Since you don't ever revisit any array element, you always test, always find null, and always assign a new BlockEmpty().

 

Secondly, in your second loop (tx, ty, tz), you've already gotten good advice to eliminate the sqrt() and to hoist out the repeated expression -- and you've probably converted to a switch statement; if you haven't, you should. The inefficiency here is that you're replacing a lot of those BlockEmpty()s you just assigned into 'blocks' with these new BlockCore()s, BlockBedrock()s, and BlockStone()s -- the BlockEmpty()s that get replaced are never used; they just take time to create and put pressure on the garbage collector when you replace them.

 

So, by combining those two loops and making BlockEmpty() the default case, you can avoid creating any of the extraneous Blocks currently made in those first two loops, plus the code will be simpler.

 

 

Taking that even further, you overwrite some of those blocks later with cracks, caves, and ore. A possible further refinement would be to place the empty (due to crack or cave) and ore blocks into the 'blocks' array first (above the loop I recommend combining), and then add a test in the combined loop, just around the switch, so that new blocks (from the switch) are only placed where the array element is currently null (that is, not already made a crack, cave, or ore).
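Here is a hedged sketch of that combined single pass, written in C++ rather than the original poster's language; all names (Block, BlockType, SIZE, the distance thresholds) are invented for illustration. The key ideas are the null test around the switch-like selection and the "empty" case being the default, so no block is allocated only to be replaced:

```cpp
#include <memory>
#include <vector>

enum class BlockType { Empty, Core, Bedrock, Stone };
struct Block { BlockType type; };

constexpr int SIZE = 16;

std::vector<std::unique_ptr<Block>> generate() {
    std::vector<std::unique_ptr<Block>> blocks(SIZE * SIZE * SIZE);
    // (A real generator would pre-place crack/cave/ore blocks here.)
    for (int x = 0; x < SIZE; ++x)
        for (int y = 0; y < SIZE; ++y)
            for (int z = 0; z < SIZE; ++z) {
                int i = (x * SIZE + y) * SIZE + z;
                if (blocks[i]) continue;       // already crack/cave/ore: skip
                int d2 = x * x + y * y + z * z; // squared distance; no sqrt
                BlockType t;
                if (d2 < 4)        t = BlockType::Core;
                else if (d2 < 25)  t = BlockType::Bedrock;
                else if (d2 < 100) t = BlockType::Stone;
                else               t = BlockType::Empty;  // the default case
                blocks[i] = std::make_unique<Block>(Block{t});
            }
    return blocks;
}
```

Every cell is written exactly once, so no allocation is wasted on a block that gets thrown away.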

 

Allocating instances of blocks, and also iterating over the blocks array, are probably the most expensive things you're doing here, I would hazard a guess (surely now that the sqrts are eliminated), so reducing those as best you can is probably going to give the best results.

 

All that said, if it's working at acceptable speed now, it's not always a good thing to keep rat-holing yourself over optimizing this bit of code.




#5288225 Optimizing Generation

Posted by Ravyne on 22 April 2016 - 07:03 PM

Also, you'll probably want to invert your for loop order, and do z, y, x, it's more memory friendly.  If you are wondering why, imagine if you had a single array that was of length 16*16*16, and think of how you're jumping around in it as you travel through your innermost for loops.

 

This is always a good place to look, but OP's code seems consistent in that X is both the outside loop and the most-significant ordinal (array indexer?). In other words, the X and Z "labels" are consistently swapped, but they don't seem to be mismatched in the way that usually causes the cache to be thrashed. The speedup seen was likely entirely due to getting rid of sqrt() -- or I'm not reading the code with the comprehension I think I am :)
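The general point about matching loop order to index order can be sketched like this; the flat 16x16x16 array and indexing scheme are illustrative:

```cpp
constexpr int N = 16;

// With a flat array indexed as (x*N + y)*N + z, the innermost loop
// should vary z: consecutive iterations then touch consecutive memory
// addresses (stride-1), which is what caches and prefetchers reward.
long sum_blocks(const int (&blocks)[N * N * N]) {
    long total = 0;
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            for (int z = 0; z < N; ++z)
                total += blocks[(x * N + y) * N + z];  // stride-1 access
    return total;
}
```

Swapping the x and z loops (while keeping the same index expression) would make each inner-loop step jump N*N elements, thrashing the cache -- it's the pairing of loop order and index order that matters, not the letters themselves.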




#5288216 Should you load assets/resources at runtime or compiletime?

Posted by Ravyne on 22 April 2016 - 05:43 PM

 

For small games I'd recommend embedding it into the executable because it allows a smaller package.

 

 

 
A resource doesn't magically shrink because you embed it into an executable.

 

This is probably not what WoopsASword meant, but it's worth mentioning that packing very small resources, such as small, low-color sprites, into a file together can actually reduce the size of your installation on disk. Prior to 2009, 512-byte disk sectors were the standard, so a 16x16-pixel, 8-bit sprite would consume a whole disk sector even though it only needed 256 bytes, for 50% wastage -- you couldn't make the physical file any smaller, but you could have stored another sprite inside "for free". After 2009, disk manufacturers started migrating to even larger 4KB sectors, and these were the majority of disks starting in about 2011; this results in about 94% wastage (you could store up to 15 additional sprites "for free"). Of course, 16x16, 8-bit sprites are not so common today, but a 32x32, 16-bit-color sprite gets us right back where we started with 50% wastage, and 32x32, 8-bit sprites waste 75%.
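The wastage arithmetic above can be captured in a tiny helper (the function name is mine, not from the post):

```cpp
// Bytes wasted when a file of `size` bytes occupies `sector`-byte sectors:
// round the size up to a whole number of sectors, then subtract.
constexpr unsigned wasted(unsigned size, unsigned sector) {
    unsigned used = ((size + sector - 1) / sector) * sector;
    return used - size;
}

static_assert(wasted(256, 512) == 256,   "16x16 8-bit sprite: 50% of a 512B sector");
static_assert(wasted(256, 4096) == 3840, "same sprite: ~94% of a 4KB sector");
static_assert(wasted(2048, 4096) == 2048, "32x32 16-bit sprite: 50% of a 4KB sector");
```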

 

The flash in SSD drives is (exclusively, as far as I know) 4KB sectors physically in silicon, so 4KB or larger logically; and these 4KB physical sectors are the unit of write-cycle endurance as well, so it's extra considerate of SSD users to fully utilize each sector.

 

If you have lots of individual files that are smaller than 4KB, you really should consider packing them together to eliminate wastage, such as by packing sprites into a sprite sheet or simply a flat file. I mention this specifically since it's a relevant consideration for 2D sprite games (lots of small images that aren't compressed) -- of course, if you have larger textures/images, and especially ones that compress well with acceptable quality, standard compression will do you fine with minimal wastage.

 

I still would not pack those kinds of files into the executable itself (better to pack them together in files/units that make sense), but it would achieve less wastage all the same.




#5288063 Going multi-threaded | Batches and Jobs

Posted by Ravyne on 21 April 2016 - 06:34 PM

What are 'Batches' and 'Jobs' when generally speaking about Thread Pooling?

What are the best way to identify a function call as a 'Batch' or either as a 'Job'?
How would one go about creating a multi-thread system?

 

Skimming Sean's article, it looks to me to break down like this -- a batch is some portion of work as determined by a subsystem (e.g. a physics engine, rendering, a resource loader), where different subsystems have different needs. He uses the example of a physics system that creates a batch for each "island" of physics objects (further explained to mean "nearby physics objects which potentially interact with one another, but not with other physics objects that exist elsewhere"). Because these "islands" are independent, you can create a batch out of each one and run them simultaneously on different processor cores, since they don't interact with one another, giving potentially higher CPU utilization. And because the cost of physics calculations usually scales with the square of the number of bodies in consideration, it's also more efficient to have more, smaller batches than fewer, larger ones (e.g. 3² + 3² + 3² < 5² + 4² < 9²). For other kinds of work, other batching strategies (or no particular strategy) might give the best results. Rendering might create batches that use the same materials (textures + shaders + etc.); a resource loader probably does one asset per "batch". A job seems to be an individual work submission -- a job is the result of batching.

 

As for creating the system itself, Sean posted a bunch of good links. The basic idea of a thread pool, which seems to be the universally-preferred system today, is that you allocate a certain number of threads statically using OS mechanisms, based on the number of CPU cores (and hyperthreads) you find available. Your game logic puts jobs into a queue, and there's some mechanism (it could be a dedicated thread, or a periodic or event-driven system) that moves work off the queue and onto one of those threads you allocated. There's a lot of detail I'm glossing over about what a job looks like in terms of an interface, but you can think of a job as some kind of class with access to all the operations and information needed to do the work, and some kind of "DoWork" method that kicks things off once it lands in one of those threads. You probably also need a way to return the results and signal when a job is done; this could be through a decoupled messaging system (like signals/slots) or a result queue -- you want to drop the result into some other delivery mechanism to free the thread as soon as possible.
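A minimal sketch of that idea -- a fixed set of workers, a mutex-protected job queue, and a condition variable to wake workers. This is the bare skeleton only; real pools add work stealing, result delivery, exception handling, and so on:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(unsigned n = std::thread::hardware_concurrency()) {
        if (n == 0) n = 1;  // hardware_concurrency may report 0
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([this] { run(); });
    }

    // Game logic calls this to put a job on the queue.
    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(m);
            jobs.push(std::move(job));
        }
        cv.notify_one();
    }

    ~ThreadPool() {  // drain remaining jobs, then join all workers
        {
            std::lock_guard<std::mutex> lock(m);
            done = true;
        }
        cv.notify_all();
        for (auto& w : workers) w.join();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [this] { return done || !jobs.empty(); });
                if (done && jobs.empty()) return;
                job = std::move(jobs.front());
                jobs.pop();
            }
            job();  // the job's "DoWork"
        }
    }

    std::vector<std::thread> workers;
    std::queue<std::function<void()>> jobs;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
};
```

Here a job is just a std::function; in a real engine it would more likely be an interface with a DoWork method plus some way to signal completion, as described above.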




#5287682 Should you load assets/resources at runtime or compiletime?

Posted by Ravyne on 19 April 2016 - 04:04 PM

I don't see how "compile time loading" could be any faster; you might be confusing this with the fact that you're paying the cost of loading the resource when the program is loaded, rather than paying it when you call LoadAsset(...) or somesuch to load the resource from a file at runtime. In other words, it's not faster; you've just failed to measure the compile-time scenario at all.

 

Josh gave a nice overview of pros/cons and suitable use-cases. You definitely don't want to "compile time load" all your assets, least of all on the false premise that it's somehow faster. The particular downside is that whatever is in the data segment is in memory whenever your program is -- that means if you have 4GB of assets, they're all in memory even if you're only using 400MB of them in the current level or scene, and your minimum requirements will reflect that. Now, with virtual memory that's not the whole story, and the OS will jump through hoops to make your game work, but--and here's the point--had you loaded those assets at runtime, you would have in memory only exactly what you need in a given level or scene. What's more, you gain the flexibility of loading lower-fidelity versions of assets (e.g. smaller mip-levels) if you need to get your memory footprint down even smaller, or the reverse: loading higher-fidelity versions for users with tons of memory.
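As a sketch of that flexibility (every name and threshold here is hypothetical, not a real engine API), a runtime loader can pick which fidelity variant of an asset to load based on available memory -- something impossible when the bytes are baked into the executable's data segment:

```cpp
#include <cstddef>
#include <string>

// Hypothetical fidelity tiers; a real engine might key off mip levels,
// texture compression formats, or LOD meshes instead.
enum class AssetTier { Low, Medium, High };

AssetTier pickTier(std::size_t availableMegabytes) {
    if (availableMegabytes >= 4096) return AssetTier::High;
    if (availableMegabytes >= 1024) return AssetTier::Medium;
    return AssetTier::Low;
}

// Resolve an asset name to the on-disk variant matching the memory budget.
std::string pickAssetPath(const std::string& name, std::size_t availableMegabytes) {
    switch (pickTier(availableMegabytes)) {
        case AssetTier::High:   return "assets/high/" + name;
        case AssetTier::Medium: return "assets/medium/" + name;
        default:                return "assets/low/" + name;
    }
}
```

The point is only that the decision happens at runtime, per machine, rather than being frozen at build time.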

 

The great majority of your assets should be loaded at runtime. Personally, I would only consider compile-time-loading assets which, if missing, would mean that the engine -- not the game -- would be unable to continue functioning as designed. Even then, I would strongly consider loading them at run-time, as soon as possible, rather than embedding them in the executable, because it still affords greater flexibility.




#5285933 Why learn STL library

Posted by Ravyne on 08 April 2016 - 06:10 PM

You should learn it because it's The Right Way* to write code in C++ -- you should write std::vector almost every time you want dynamic memory, you should write std::unique_ptr every time you have a pointer whose contents are owned by a single entity, you should be using the standard algorithms wherever you can rather than writing raw loops, you should be using std::move, and countless other examples.
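For instance, a contrived sketch of a few of those idioms together (Enemy and the numbers are made up for illustration):

```cpp
#include <memory>
#include <numeric>
#include <vector>

struct Enemy { int hp; };

// A standard algorithm instead of a hand-rolled accumulation loop.
int totalHp(const std::vector<Enemy>& enemies) {
    return std::accumulate(enemies.begin(), enemies.end(), 0,
                           [](int sum, const Enemy& e) { return sum + e.hp; });
}

// std::unique_ptr makes single ownership explicit in the signature;
// std::move is how that ownership is handed off.
std::unique_ptr<Enemy> spawnBoss() {
    return std::make_unique<Enemy>(Enemy{500});
}
```

With a squad of {10, 20, 30}, totalHp yields 60; after `auto owner = std::move(boss);` the original pointer is null and `owner` holds the Enemy -- ownership transferred, not copied.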

 

I would argue that, without the standard library, it can hardly be said you're writing C++ at all. Yes, just "the syntax" is C++, but that's a bit like saying the collection of English words is English. While technically true in a legalistic sense, a language -- whether computer or human -- is more than its atoms. It's about how the parts fit together, and what patterns have emerged to deal with common situations. Most programmers -- especially new or young programmers -- who don't utilize the standard library just end up writing C++-flavored nonsense. You can use a library that's not the standard library, such as one instituted by your company or engineering organization, but there had better be a good reason for avoiding the standard library, and there had better be a rock-solid implementation of its alternative.

 

 

* The C++ Standard Library / STL is not perfect; it's got some warts, imperfections, and accumulated cruft, but by and large it's great and there's a lot of good, useful stuff inside. When it offers a solution to your problem, or a bunch of parts that can be wired together to solve your problem, it really ought to be the first thing you reach for. For the general case, I don't believe there's a more performant and battle-hardened library on the planet. It's true that there are those with special needs who might avoid the standard library because they can't use exceptions in their environment, or who might be able to devise special ways of doing things that are faster than the standard library because they understand their exact use-case better (e.g. which corners they can cut), so their solution actually has to do less work. But it's a rare thing that anyone comes up with something faster than the standard library functions while also maintaining the same level of general-purpose correctness.

 

Don't make the mistake of assuming, as most starting game developers do, that the standard library is "too slow" or otherwise to be avoided. Don't make the mistake of assuming, as most starting game developers do, that "real (game) programmers" write every line of code they've ever laid eyes on. Don't make the mistake of assuming, as most starting game developers do, that using such a pedestrian library fails to live up to the game-developers-as-programming-gods myth, and that if you use it you'll never be elite. Don't make the mistake of assuming, as most starting game developers do, that you know what needs to be optimized before you've measured it with real tools that you really know how to use.




#5285492 How would you go about developing a game console OS?

Posted by Ravyne on 06 April 2016 - 03:23 PM

You're more or less talking about spinning a whole new FreeBSD distribution, which is complicated but clearly has been done by many parties. The scope of that discussion is far too large to be had here.

 

Now, without being unduly discouraging, your questions worry me because you seem to be concentrating on superficial elements of this would-be operating system: how to play a video during startup, tamper checking, one specific boot-loader operation. These might seem like trivial and/or fundamental questions, but by the time you can display video and sound you'll already have booted enough of an operating system to give you near-complete functionality anyway. You'll need at least a basic sort of kernel that initializes memory and IO, talks to a filesystem, can load a driver module (or contains built-in drivers) for video and audio, handles input (probably another driver), and is capable of fast interrupt handling so it can fill and move those audio buffers with super-low latency if you want to avoid popping and other audio artifacts (because our ears are so sensitive to aberrations, latency is even more important in audio playback than in video).

 

And that's just to get something to boot -- not something optimized for running games. I'm certain that Sony's FreeBSD-based Orbis OS is highly streamlined for gaming workloads and has a unique interface to graphics and audio that, I would guess, minimizes dependence on the kernel and on "traditional" drivers as such. Without such care and attention, whatever you might produce from a collection of standard *BSD components will be just that: a slightly slimmer BSD that performs no differently in any significant way.





