
Ravyne

Member Since 26 Feb 2007
Offline Last Active Yesterday, 07:45 PM

#5216949 2D vs 3D

Posted by Ravyne on 16 March 2015 - 04:36 PM

2D really simplifies everything, but not to the point of transforming it entirely -- in short, working in 2D is a great playground for learning to work in 3D later. Take physics, for example: 2D basically means you have only 3 degrees of freedom to deal with (translation along X, along Y, and rotation in the X-Y plane), rather than 6 (X, Y, Z, roll, pitch, yaw) -- everything is basically the same, there's just less of it, plus the simplifying assumptions you can make.
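The degree-of-freedom difference is easy to see if you write the state down as plain data. A hypothetical sketch (these types are illustrative, not from any particular engine):

```cpp
#include <cassert>

// 2D rigid-body state: 3 degrees of freedom.
struct RigidBody2D {
    float x, y;    // translation along X and Y
    float angle;   // one rotation, in the X-Y plane
};

// 3D rigid-body state: 6 degrees of freedom.
struct RigidBody3D {
    float x, y, z;           // translation along three axes
    float roll, pitch, yaw;  // three rotations
};
```

Everything downstream (integration, collision response, constraint solving) is the same shape in both cases; there's simply half as much state to reason about in 2D.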

 

You probably are better off starting with 2D to get your bearings if you're new. One of the most difficult parts of making your first complex game is figuring out how all the parts fit together and interact -- and it's almost entirely the same whether 2D or 3D, but you won't have the 3D details holding you up if you start with 2D.

 

But I guess that really only applies if you want to write those systems yourself. If you're going to use Unreal Engine or Unity or other engines/middleware, you'd probably be fine to choose either if you're very confident that your math and reasoning skills are up to the task. Just keep in mind that basically everything is twice as difficult or more in 3D than in 2D.




#5216356 Array of structs vs struct of arrays, and cache friendliness

Posted by Ravyne on 13 March 2015 - 05:41 PM


It's SIMD-4-register friendly, but not very cache friendly. A tuple of 4 vectors is 24 bytes in size, thus sometimes the x, y, z components cross cache line borders, and in that case you could just as well use pure SoA.

 

That you cross a cache line border doesn't really matter as long as you're accessing the data in a linear fashion. The memory that maps to the next cache line doesn't increase contention on the cache line before it. You may stall mid-operation waiting to read in that next cache line, but you were going to have to read it anyways, and after a few dozen or hundred sequentially-read cache lines, the prefetcher is going to grab it for you ahead of time anyways -- all it might cost you is that the final cache line goes partially unused.

 

It all really comes down to this -- cache-friendly means having all the data you need when you need it, with as little other data in the way as possible. If you have a struct containing a 3D position and 52 bytes of other data, then your position-updating code can only update one position per cache line read in. If you don't also do work on those other 52 bytes, but you need them later, then you have to load that cache line again, and you pay that high cost twice. If you can't use the rest of the cache line's data right away, then it makes sense to restructure your data such that positions are in their own contiguous array, where you can process 5.3 positions per cache line (and in the same amount of time as before, to boot, given that cache-line reads are the bottleneck).
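To make the layout difference concrete, here's a minimal sketch (the struct names, field sizes, and update function are hypothetical, chosen only to match the 12-bytes-of-position, 52-bytes-of-other-data example above):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// AoS: each position drags 52 bytes of unrelated data into the cache,
// so a position-update pass touches one entity per 64-byte line.
struct EntityAoS {
    float x, y, z;            // 12 bytes of position
    unsigned char other[52];  // 52 bytes of other data
};
static_assert(sizeof(EntityAoS) == 64, "one entity per cache line");

// SoA: positions packed contiguously, ~5.3 per 64-byte cache line.
struct EntitiesSoA {
    std::vector<float> x, y, z;
    // other data lives in its own arrays, loaded only when actually needed
};

// Linear sweep over tightly packed floats; the prefetcher can stream these.
void update_positions(EntitiesSoA &e, float dx, float dy, float dz) {
    for (std::size_t i = 0; i < e.x.size(); ++i) {
        e.x[i] += dx;
        e.y[i] += dy;
        e.z[i] += dz;
    }
}
```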

 

It can't be overstated how important cache utilization is. Take your system memory bandwidth and divide it by 64 bytes -- that's the maximum number of cache lines you can load per second. Not one more. A typical system bandwidth is 25.6 GB/s, which translates to just 400 million cache lines per second. If you read a pathological worst case of just one byte from each cache line, that's just 400 megabytes per second -- and even if you read 12 bytes of position from each cache line, that's just 4.8 GB/s out of 25.6, a mere shadow of the machine's potential. This resource of cache-line reads per second is *way* more critical than how many operations your CPU can do per second, because it gates everything that doesn't operate entirely within the caches -- CPU cycles and even in-cache reads/writes are almost free in comparison, being on the order of 100x and 10x faster, respectively.

 

And that's all best case, right? Because like everything else that runs on a clock in your computer, those unused resources are expiring by the instant -- if you don't read any memory for 12 microseconds, you don't get to save 12 microseconds' worth of cache line reads and roll them over to when they're more useful to you; they simply vanish into the ether of time. You use it or you lose it. A program at peak performance would load one new cache line at every opportunity, never load the same line twice, and be entirely done with each cache line by the time new data needs start forcing things to be evicted (which depends on a number of factors, but assume the window is the size of your L3 cache, so anywhere between 1-6 megabytes in mainstream processors, 12-20 in enthusiast processors like the 6-8 core i7s, or larger in server CPUs).




#5215984 To scale or not to scale?

Posted by Ravyne on 11 March 2015 - 11:27 PM

This has not much to do with the question, but I'm reminded of playing "Black Bass" on the NES when I was young. I had borrowed it from a friend, had no instructions, and had never seen it played. I played for a while with some small silhouettes of fish swimming beneath my lure, but they would never, ever strike at it. After probably hours, I happened upon a larger fish -- at least 4x as big as the silhouettes -- and that fish would strike on occasion, but I never could get him hooked. More hours passed concentrating on trying to catch this one big fish -- and finally I got him hooked! I mashed the button as quickly as I could to reel him into the boat, excited to see how big my catch was...

 

And then, as the fish reached the edge of the boat, an ENORMOUS hand reached down and plucked the tiny minnow I had just spent hours catching from the water.

 

 

And that, kids, is why I never played Black Bass again.

 

 

 

There is a lesson to be learned about scale here, however, and that is that scale means nothing absent a well-known reference. Is that fish under the serene blue surface of the lake tiny or enormous? NOT A CLUE! It's bigger than the other fish, what now? STILL NO CLUE! MAYBE THE OTHER FISH ARE JUST SUPER TINY! Look, it's at least as big as that lily pad, it's gotta be big, right? HELL NO! SURE, THEY WANT YOU TO THINK IT'S A NORMAL-SIZED LILY PAD, BUT MAYBE IT'S TINY TOO! OR ENORMOUS! NO ONE KNOWS BECAUSE THERE'S NO HANDS, COKE CANS, OR BANANAS IN THE FRAME!




#5215764 Best programming paradigm for noobs?

Posted by Ravyne on 10 March 2015 - 08:40 PM


Personally, I feel that it doesn't matter. If you start programming procedurally, you will write bad procedural code. If you start programming in an OO fashion, you will write bad OO code (and probably bad procedural code too). Same with functional.
 
When you start programming, you will suck. Same as when you start anything. And the only way to get better is to practice.

 

That's an important point -- there's no language in which you're going to write good code right from the start. In turn, this means your goals matter -- if your goal is to be a reasonably competent programmer who can whip up scripts and little programs for yourself, that's a different goal than if you want to write operating systems or engines for AAA games. Even still, either goal is pretty far removed from the absolute beginner, so it might be better to divorce the decision of which language or paradigm is best for early learning from the decision of which language or paradigm you want to grow into -- although the choice of the latter might influence the choice of the former.




#5214861 NVIDIA NSight vs Visual Studio 2013 Graphics Debugger?

Posted by Ravyne on 05 March 2015 - 06:45 PM

I write the documentation for Graphics Diagnostics, and it isn't really tuned for compute shaders -- it can handle them under some circumstances in the current version, but IIRC, only if your compute shader plays a role in your rendering (e.g. computing local lighting influences, as in DICE's engine).

 

If you can deal with being tied to one of the vendors, both provide good, free graphics debuggers. The bigger selling point of Graphics Diagnostics is that it's vendor-neutral. It's good and getting better, but it's not an optimal experience for compute shaders right now.




#5214363 Convenient approach to composite pattern in C++

Posted by Ravyne on 03 March 2015 - 10:40 PM

I've been playing with Erik's code a bit, and another candidate is the almost-never-overloaded comma operator. I didn't post the code since I'm unfamiliar with the troubles that might arise. I suspect it's safe, as comma has the lowest precedence, and this use is similar to other uses in C++ DSL implementations, or to how it's used to collect terms in parts of Boost; I'm just unfamiliar with the potential hazards that might be found.

Usage looks like:

(Listeners, &Listener::InformOfSomething)(1,2,3,4);

Which doesn't read too poorly if you read the first set of parens as something like "cartesian product of this collection and this member function".
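For the curious, here's one hedged sketch of how such a comma overload might look -- the broadcaster type and the restriction to std::vector are my own assumptions, and like the original post, this is an illustration rather than vetted production code:

```cpp
#include <cassert>
#include <vector>

// Pairing a vector with a pointer-to-member yields a callable that
// broadcasts its arguments to every element of the collection.
template<class Container, class MemFn>
struct broadcaster {
    Container &c;
    MemFn fn;
    template<class... Args>
    void operator()(Args&&... args) const {
        for (auto &elem : c)
            (elem.*fn)(args...);  // invoke the member on each element
    }
};

// Overload comma for (vector of T, pointer to member function of T).
template<class T, class A, class R, class C, class... P>
broadcaster<std::vector<T, A>, R (C::*)(P...)>
operator,(std::vector<T, A> &c, R (C::*fn)(P...)) {
    return {c, fn};
}
```

Constraining the left operand to a concrete container template (rather than any type) keeps the overload from hijacking the built-in comma elsewhere, which is the main hazard this trick needs to guard against.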


#5214335 Convenient approach to composite pattern in C++

Posted by Ravyne on 03 March 2015 - 06:46 PM

It's probably something of an oversight that there's no version of for_each that simply takes a container and a function to apply to all its elements -- it isn't much more inconvenient to simply use the iterator-pair version, or a short range-for form, like "for (auto l : listeners) l->do_stuff(1,2,3);", but a version of for_each that doesn't use an iterator pair would eliminate the unnecessary verbosity of the most-common use case:

 

Luckily, with non-member begin() and non-member end(), I think (that is to say **disclaimer** no warranty expressed or implied) you can implement this easily in those terms and in the terms of the existing iterator-pair version of for_each. This is not tested, but it would look something like this:

 

template<class Input, class Function>
Function for_each(Input &&in, Function fn)
{
    return std::for_each(std::begin(in), std::end(in), fn);
}

 

and usage would look something like so:

 

for_each(listeners, [](Listener &l) { l.InformOfSomething(1, 2, 3, 4); });

 

I think, then, that you could reduce out the lambda with a function object template of some kind -- something like a "call_member_with_args" template, whose usage could look something like this:

 

for_each(listeners, call(&Listener::InformOfSomething)(1,2,3,4));

 

 

But I'll leave implementation as an exercise to the reader. (I think you could probably use std::function, but it's not lightweight -- I wonder, though, if it's exactly as heavy as an encapsulated member function anyways.)
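For readers who want to skip the exercise, here's one hedged sketch of such a helper -- entirely my own construction, using C++14's std::index_sequence for brevity. No std::function is involved, so the resulting object stays lightweight:

```cpp
#include <cassert>
#include <tuple>
#include <utility>

// Binds a pointer-to-member plus arguments into a small function object.
template<class MemFn, class... Args>
struct member_caller {
    MemFn fn;
    std::tuple<Args...> args;

    template<class Obj, std::size_t... I>
    void apply(Obj &o, std::index_sequence<I...>) const {
        (o.*fn)(std::get<I>(args)...);  // unpack the stored arguments
    }
    template<class Obj>
    void operator()(Obj &o) const {
        apply(o, std::index_sequence_for<Args...>{});
    }
};

// call(&T::member)(a, b, ...) builds the caller in two steps.
template<class MemFn>
struct call_binder {
    MemFn fn;
    template<class... Args>
    member_caller<MemFn, Args...> operator()(Args... a) const {
        return {fn, std::make_tuple(a...)};
    }
};

template<class MemFn>
call_binder<MemFn> call(MemFn fn) { return {fn}; }
```

The resulting member_caller is exactly the size of the member pointer plus the bound arguments -- no type erasure, no heap allocation.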

 

 

[EDIT] ... and Erik's C++11 variadic-template-fu is both stronger and quicker than my outdated C++03 template-fu




#5214310 Resource Management

Posted by Ravyne on 03 March 2015 - 04:00 PM


So do you claim that singleton patterns are bad habit? I've seen them used many times while viewing engines' source codes.

 

Others have already addressed it, and I don't want this thread to descend into yet another long discussion of Singleton's pitfalls like so many others have. But since you asked me, I'll defend my position briefly.

 

The single worst thing the Singleton pattern does, IMO, is let you feel good about outright ignoring important and often multitudinous dependencies that really ought to be thought about carefully, or at the very least ought to be obvious. The global-ness and always-ness of Singleton allows this to happen, and its facade of OOP-ness fools you into thinking that this state of affairs is A-Okay.

Furthermore, the single-ness of Singleton allows you to make the dangerous assumption that there will only ever be one, and so your typical singleton object never stops to consider what it would take for more than one of them to cohabitate the same program. Worse, many people who do not understand this actually favor this trait as a way to "simplify" their task -- they implement singletons when they "only want one" right now, and come to regret it later when they find out they actually need two or more. Singleton as an up-front design pattern should only be considered (and even then, not always accepted) when two instances *cannot* coexist, ever, for reasons or requirements that are beyond your control.

And the kicker is that a simple global can be cajoled into providing all the essential properties of a singleton design for essentially zero implementation cost -- with the kind of Singleton people write to feel like good OOP citizens, you actually have to write a bunch of code to participate in all their pitfalls and downsides. Sheesh!
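As a sketch of that last point (AudioDevice is a made-up example type): a plain class plus one conventional access function gives the same convenience as a singleton, without baking "there can be only one" into the class itself.

```cpp
#include <cassert>

// Nothing in the class enforces "only one" -- a test, a tool, or a
// second subsystem can freely construct its own instance.
struct AudioDevice {
    int volume = 100;
};

// A conventional "default" instance for code that wants shared access.
AudioDevice &default_audio() {
    static AudioDevice instance;  // lazily initialized, thread-safe in C++11
    return instance;
}
```

Code that only ever wants one instance calls default_audio(); code that later needs a second (say, for unit tests or a second output device) just constructs one -- no redesign required.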




#5214297 Server for Unity game

Posted by Ravyne on 03 March 2015 - 02:55 PM

I've probably over-recommended it in the past for things that other approaches would suit better, but I think what you want here is a RESTful API, at least as long as other clients don't need to see changes instantaneously (that is, if your design calls for one player being able to upload a map, and for another player to be able to download it sometime soon, but not instantly). There are things you can do to hasten the turn-around time, but at its core a RESTful interface supports multi-layer caching -- it's one of the reasons the internet scales so well, but it also introduces propagation delays where you might see a recently-cached version of the content that's older than what just changed on the server. When you request a RESTful resource (via a URL), the request can be intercepted by, say, a cache your ISP runs, and they might respond with something they have stored for that URL rather than hitting up the actual server for it again. For content that's static or infrequently changed, this is good -- otherwise Netflix would have ground the internet to a halt long ago.

 

There are lots of good resources on RESTful interfaces, but the basics of the API are that each URL represents a unique resource (a map, in your case), and there is a standard set of built-in HTTP methods that you can use to get, create (post), update, and delete these resources. Usually, the primary resource (URL) for the thing you're dealing with isn't really the thing itself, but might respond with a list of sub-resources that make up the thing, in the form of URLs that represent each sub-resource or things you can do to/with the sub-resources (you can think of these as members of a class). So, you would request the primary URL, and in the response you would find another URL to update the map data itself, for instance. I forget the official document that defines it, but the acronym HATEOAS describes how to best use this approach, and it's the basis of Atom/RSS feeds, so it's a well-proven concept. Also, I may not be saying it clearly, but REST is an architecture, not a protocol -- as such there are some differences of opinion regarding details of implementation.
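As a toy illustration of the verb-plus-URL idea (not a real HTTP server -- the Router type, the routes, and the string-keyed dispatch are all my own simplifications):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Maps "METHOD url" to a handler; a real server would also parse sockets,
// headers, and status codes, and sit behind the caching layers described above.
struct Router {
    std::map<std::string, std::function<std::string(const std::string &)>> handlers;

    void route(const std::string &method, const std::string &url,
               std::function<std::string(const std::string &)> h) {
        handlers[method + " " + url] = std::move(h);
    }

    std::string handle(const std::string &method, const std::string &url,
                       const std::string &body = "") {
        auto it = handlers.find(method + " " + url);
        return it == handlers.end() ? "404" : it->second(body);
    }
};
```

The point of the sketch is only the shape: GET /maps/1 and POST /maps are distinct (method, resource) pairs, each mapped to its own handler.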

 

Azure and other similar cloud providers are one solution that provides a great deal of integration and value -- it has everything you need out-of-the-box. Having never done so before, I wrote a simple RESTful API on Azure in less than a day, including all the setup and familiarizing myself with the basic features of Azure. Another approach would be using a host like Digital Ocean or Linode to host a collection of Docker containers that make up sub-services in your server (one for the database, one for blob storage, one for the REST API, one for a load balancer, etc.); with that, you can fairly easily spin up additional resources as you need them. There are third-party solutions for that, but it's not all integrated like Azure is. Anyways, the buzzword today is "micro-services", so you might want to google that.




#5214000 CRPG Pre-rendered Backgrounds - Draw depth to Z-Buffer

Posted by Ravyne on 02 March 2015 - 01:31 PM

Seems reasonable to me. You can probably even draw all your opaque objects before rendering your background, which should give you back most of your hierarchical-Z, I would think. You'll need to do transparent objects afterwards, though, if you have any.




#5213777 Resource Management

Posted by Ravyne on 01 March 2015 - 06:50 PM


If you try to abstract a manager, you will get back to a specialized class no matter what you do. Resource management needs to be as fast as any other module in the engine, and by abstracting you're forced to add complexity to your scheme at some point. A lot of engines out there have some kind of asset manager, but the class is still specialized; each resource does not extend an asset.

 

I nearly started reading it wrong the first time, but this is important. You really do not want a base resource type that is required to be inherited from. That means tight coupling between resources and manager, and also that you cannot use your manager with any existing class/struct or with any primitive type without first writing a wrapper class that inherits the base type. This leads to a lot of unnecessary boilerplate -- better would be for resources and their manager to be loosely coupled; then you can readily place any existing type, or even primitive types, into a properly-specialized manager. There's no reason your manager should not readily accept, say, std::string, and there's no reason it should be precluded from accepting, say, int or bool types, or C-style file handles (which, IIRC, are just integer identifiers), either. Irlan is right, also, about specialization at some point -- it's my experience that you want separate managers for separate kinds of things (textures, meshes), rather than one uber-manager that holds all resources. You can (and IMO, should) however allow a manager to manage an abstract type -- e.g. one texture manager instead of a Texture2DManager plus Texture3DManager, etc.
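A minimal sketch of that loose coupling (ResourceCache is a hypothetical name; a real manager would add loading, deduplication, lifetime policy, and so on):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// The managed type needs no base class, so std::string, int, or a
// C-style handle all work unmodified -- no wrapper boilerplate.
template<class Resource>
class ResourceCache {
    std::map<std::string, Resource> resources;
public:
    void add(const std::string &name, Resource r) {
        resources[name] = std::move(r);
    }
    // Returns nullptr when the named resource isn't present.
    Resource *find(const std::string &name) {
        auto it = resources.find(name);
        return it == resources.end() ? nullptr : &it->second;
    }
};
```

Specialization still happens where it belongs -- ResourceCache<Texture> and ResourceCache<Mesh> are distinct managers -- but the coupling runs through the template parameter rather than through an inheritance requirement on the resources themselves.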

 

Also, many people reach for the singleton pattern for such managers, but I think this is wrong-headed. Do not allow your design to assume that only one manager of a single resource type (abstract or concrete) exists -- if you design such that many can co-exist, you can choose to use only one and get all the same traits as a system which only allows for one, but you cannot choose to create many if your design enforces only one.

 

Also, everything people have said about the specificity of "Manager" is correct, and I do not think you've clearly defined what your goals are -- usually, a resource manager implies either a common point of access, resource deduplication, on-demand resource retirement/restoration (e.g. for streaming), pooled allocation, or some combination of these things. It is my experience that each of these properties is best treated as a separate responsibility, and so encapsulated into its own class, with higher-level functionality achieved by orchestrating these classes together, either through a higher-level class or by policy.




#5213563 leave function definition "empty" (c++) ?

Posted by Ravyne on 28 February 2015 - 04:40 PM

I agree that the broader strokes of this design smell funny, and I would encourage the OP to consider a different approach.

 

But, ignoring that for now and dealing with the problem at hand, I think what I'd do is make getSpecialItemName a function pointer, and set it to a default implementation. For a game to override this function, it would create its own function with a compatible signature, and then set the getSpecialItemName function pointer to point to that function. This is basically implementing one-off virtual inheritance, though, and if you're implementing this all in C++ (but it looks like you're using C, maybe?) you'd be better off just using the facilities provided by the language.

 

If you take this approach, do provide a default function, even if all it does is force the program to exit with an error code (because it's an error for the client to call getItemName with an ID that invokes getSpecialItemName when one has not been provided). You could say "If the function pointer is null, then skip it," but then A) you'd have to check at every call site, and B) what would you return anyways? To be clear, there's nothing preventing client code from setting the function pointer to null, so this doesn't prevent that error, but giving it a non-null default makes a later null assignment an explicit act on the client's part.
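A sketch of that shape in code -- getSpecialItemName is the post's name, but the signature and function bodies are my own guesses at the intent:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

// Signature every override must match.
typedef const char *(*SpecialNameFn)(int id);

// Default: calling it without an override installed is a hard error.
static const char *default_special_name(int /*id*/) {
    std::fprintf(stderr, "getSpecialItemName called with no override set\n");
    std::exit(EXIT_FAILURE);
}

// Non-null by default, so no null check is needed at call sites, and a
// later null assignment is an explicit act by the client.
static SpecialNameFn getSpecialItemName = default_special_name;

// A game overrides by pointing the function pointer at its own function.
static const char *my_special_name(int id) {
    return id == 7 ? "Excalibur" : "Unknown";
}
// somewhere in game init:  getSpecialItemName = my_special_name;
```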




#5213400 Ray vs Sphere Issue

Posted by Ravyne on 27 February 2015 - 04:04 PM

If you don't somehow ignore what's behind you, then you're not dealing with a ray, you're dealing with a line instead -- because rays are directed, of course.

 

You can do that by culling, or you can do that by accepting all collisions on a preliminary basis, and then rejecting all but the nearest non-negative collision (or nearest negative, I suppose, depending on whether your frame of reference looks in a positive or negative direction down the axis). But you need to preserve the signedness of the distance then; if you just have an unsigned distance, which is always positive, you can instead do a simple half-space test against a plane through your ray origin, perpendicular to its direction.
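Here's a hedged sketch of the first approach -- keep the signed distance along the ray and reject hits behind the origin. The quadratic setup assumes a unit-length direction, and the Vec3 helpers are minimal stand-ins:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// True (with the nearest non-negative t) if the ray from origin o along
// unit direction d hits the sphere at center c with radius r.
bool ray_sphere(Vec3 o, Vec3 d, Vec3 c, float r, float &t) {
    Vec3 oc = sub(o, c);
    float b = dot(oc, d);
    float disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.0f) return false;  // the ray's line misses entirely
    float s = std::sqrt(disc);
    t = -b - s;                     // nearer intersection, still signed
    if (t < 0.0f) t = -b + s;       // origin inside the sphere: take far hit
    return t >= 0.0f;               // both negative -> sphere is behind us
}
```

Dropping the two `t < 0` rejections turns this back into a line-vs-sphere test, which is exactly the bug described above.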




#5213184 Will game maker hurt me in the long run?

Posted by Ravyne on 26 February 2015 - 02:31 PM


If you are serious about making games, you will not find it as a credential for getting a games job. I believe Game Maker is a purely hobbyist tool.

 

If your resume says, effectively, "I once made a game no one knows about in GameMaker; download it here to see how unpolished and buggy it is," then yes, it's not a very effective credential. Nor will it ever be an effective credential if your aim is to become a graphics or engine programmer.

 

If, on the other hand, you can show that you brought to completion a highly-polished, relatively bug-free game that at least the people who've played it seem to enjoy, even if there's not all that many of them, then that's an excellent credential for many roles in the games industry. Not the only one you need, most likely, but it makes a positive note on one of the most crucial credentials -- the ability to make and ship something complete and polished, and possibly to take and implement user feedback.

 

Fully 60% of the top-grossing mobile games are made in Unity, which is not very far removed from GameMaker -- I daresay that anyone who can make a very complete and very polished game in GameMaker is fully capable of wielding, or learning to wield, Unity very effectively.




#5213180 Non-Member Functions Improve Encapsulation?

Posted by Ravyne on 26 February 2015 - 02:17 PM

I think your confusion is that what you now understand as a synonym for 'encapsulation' is having a class with all its data and operations inside it -- this satisfies the goal of hiding the class's internals from the outside world, and that's indeed a good thing.

 

This is "encapsulation 101" so to speak, which is the view that every class is an island unto itself.

 

 

But what you might notice in some of the member functions of your class is that several of them are (or could be) implemented purely in terms of other members of your class's public interface (both member functions and member variables). When you have such a function that could be implemented in terms of the existing public interface, but you instead make it a member, that function now has access to all the protected and private members that it doesn't need to do its job. While this may seem innocuous at first, often the members and variables that are marked protected or private are marked so because they're either a shared utility (in which case, your member that could be implemented as a non-member doesn't need access), or they're involved in the internal book-keeping of the class (in which case, it's dangerous to give that kind of power away to any member who doesn't need it). In short, even though your non-member candidate is comfortable being a member, this decreases encapsulation by granting it powers and access that it does not need to do its job.

 

This is encapsulation 201 -- here, encapsulation is viewed to mean that any code construct, all the way down to a single function, should strive to have only the minimum powers and access that it needs to perform its job. This transforms your job as a class designer from someone who blindly puts all the related parts and functions into the same bag, into someone who considers all the parts and functions that are needed, and then chooses the minimum set to put in the bag such that the others can be implemented in terms of what's in the bag.
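A small sketch of that idea (Account is a made-up example): the non-member can only use the public interface, so it simply cannot corrupt the private book-keeping it has no business touching.

```cpp
#include <cassert>

class Account {
    long balance_cents = 0;  // private book-keeping
public:
    long balance() const { return balance_cents; }
    void deposit(long cents) { balance_cents += cents; }
    void withdraw(long cents) { balance_cents -= cents; }
};

// Non-member, implemented purely in terms of the public interface.
// A bug here cannot be a privates-corrupting bug, which narrows any
// search for invariant violations to the members themselves.
void transfer(Account &from, Account &to, long cents) {
    from.withdraw(cents);
    to.deposit(cents);
}
```

Had transfer been made a member, it would have gained direct write access to balance_cents for no benefit -- exactly the unneeded power the paragraph above warns about.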

 

This may seem like a somewhat academic exercise, in which you are erecting walls to protect yourself from self-sabotage -- and you are. But it is not merely academic: as one example of a benefit, should you ever find a bug in a non-member, and you know that your public members do not misuse the private and protected ones, you can instantly narrow your search, because you know the non-member cannot access those protected or private members.

 

You are right, though, that just making everything a non-member does not achieve encapsulation if as a result you simply take parts of the interface that should remain private and make them public so that you can access them via non-members (as with getters and setters, or simply making more members public). That is the opposite of encapsulation. A well-encapsulated class strives for its members to represent the smallest reasonable interface (that is, it may not be the absolute minimum, but a minimum tempered with pragmatism) that supports its full functionality and general uses.





