


#5193862 C# .Net Open Source

Posted by Ravyne on 20 November 2014 - 03:08 PM

It's not going to take many existing Java jobs away from places where Java is already in place; the only reason such a migration might make sense is if a company was having trouble finding Java people (or had an abundance of C# people) in their local area. For new jobs where neither Java nor C# is already in place, C# will be more attractive now -- to be perfectly blunt, C# is a better language than Java, full stop. The only advantage Java has really had is that it had been more open and had gotten a head start, especially on non-Microsoft platforms. Through Mono, C# has already been an option in many places, but people are wary of Mono for fear of it not being "official" or for fear of Microsoft one day coming after them. Those concerns are now moot.


The core of .NET is open, but not everything, so you won't see total compatibility of every .NET desktop application overnight. What you will see, eventually, is the open-source core being pulled into and drawn from by projects like Mono or Unity. As a result, those projects will have an easier time maintaining parity with language features, and will have more time to work on the things that aren't part of the open-source core. The runtime, and effectively the languages, are all part of that core, though -- I think it's just parts of the platform libraries that aren't open yet.


Poor cache awareness in the application code, however, might hurt performance more, but then again, if you don't do this in C++ you will have a similar slowdown.


It's true, but the design of managed languages and the CLR gives you less control over the very precise behavior of memory use. Cache-aware C# runs better than non-cache-aware C#, but will likely never run as well as cache-aware C or C++, and C# still lacks truly deterministic resource reclamation, which is also a hindrance to performance-tuned C#.
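To make the cache-awareness point concrete, here's a minimal C++ sketch (names are my own, not from the discussion above): two functions that sum the same matrix, differing only in traversal order. The row-major walk touches consecutive addresses and is cache-friendly; the column-major walk strides across cache lines and is typically far slower for large matrices, even though both compute the same result.

```cpp
#include <cstddef>
#include <vector>

// Sum a dense n-by-n matrix stored row-major in a flat vector.
// Row-by-row traversal visits consecutive addresses (cache-friendly).
double sum_row_major(const std::vector<double>& m, std::size_t n) {
    double s = 0.0;
    for (std::size_t r = 0; r < n; ++r)
        for (std::size_t c = 0; c < n; ++c)
            s += m[r * n + c];   // consecutive memory accesses
    return s;
}

// Column-by-column traversal strides n doubles between accesses,
// defeating the cache; same answer, usually much slower for large n.
double sum_col_major(const std::vector<double>& m, std::size_t n) {
    double s = 0.0;
    for (std::size_t c = 0; c < n; ++c)
        for (std::size_t r = 0; r < n; ++r)
            s += m[r * n + c];   // stride of n doubles each step
    return s;
}
```

The same data-layout concerns apply in C#, but the CLR gives you fewer tools (no custom allocators, garbage-collected heap placement) to act on them.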


Unity3D has already started using C# as its scripting language and, in conjunction with MS, has developed a Visual Studio plugin that will interface with it for writing your scripts while in Unity.


Actually, Microsoft bought a company called SyntaxTree, who already made and sold a plugin called UnityVS. Those folks are now working as part of Microsoft, together with the Visual Studio team, to offer a better product. On top of that, the product, now called Visual Studio Tools for Unity, has been made free, and there's now VS2013 Community, a free version of Visual Studio that supports such plugins. VS Community and VSTU are part of a general trend of making tools more accessible.

#5193739 Need a short name to replace a really really long function's name

Posted by Ravyne on 20 November 2014 - 01:55 AM

Saving keystrokes is never a good reason to abbreviate or to give something a name that's less accurate or descriptive than a longer one. By all means use the shortest name that's accurate, but never sacrifice accuracy for brevity.

That being said, if you follow that guideline but find yourself having trouble giving things reasonably short names, it can be an indication that your function is doing too much -- in particular, if you find yourself reaching for a conjunction like "and"/"or", it's almost always a sign that you should split your function. Doing so will make your code more flexible and less coupled.
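A small, hypothetical C++ illustration of the "conjunction" smell (the function names here are invented for the example): a `parseAndValidateTokens` function is really two jobs, and splitting it lets callers parse without validating, or validate tokens from another source.

```cpp
#include <string>
#include <vector>

// Instead of one parseAndValidateTokens(), split the two jobs apart.

// Job 1: split a line into whitespace-separated tokens.
std::vector<std::string> parseTokens(const std::string& line) {
    std::vector<std::string> tokens;
    std::string current;
    for (char ch : line) {
        if (ch == ' ') {
            if (!current.empty()) tokens.push_back(current);
            current.clear();
        } else {
            current += ch;
        }
    }
    if (!current.empty()) tokens.push_back(current);
    return tokens;
}

// Job 2: decide whether a token list is acceptable.
// (Stand-in rule, purely for illustration.)
bool validateTokens(const std::vector<std::string>& tokens) {
    return !tokens.empty();
}
```

Each function now has a short, accurate name, and the pieces compose: `validateTokens(parseTokens(line))`.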

#5193700 Runge-Kutta in a large solar system

Posted by Ravyne on 19 November 2014 - 06:31 PM

When you're dealing with that much sparsely populated space, and equations of that magnitude, it's quite common not to deal with everything in a single space -- the numbers just start to break down. Unless you're willing to eat the cost of using a math library with sufficiently large precision, you won't get it out of floats or doubles. I'd probably have each planet be its own coordinate system, which itself orbits the sun's coordinate system, and likewise for moons as they relate to their planets. Then, based on the masses of the bodies involved, you'll be able to determine a radius within which each body is the one exerting the most force, and you can either switch over to just using that one, or calculate for other bodies as well and interpolate the forces -- it depends on how accurate you want to be.
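A minimal sketch of the nested-coordinate-frame idea (the `Body`/`worldPos` names are my own, not from any particular engine): each body stores only a small offset relative to its parent, so local simulation stays in a numerically comfortable range, and the absolute position is recovered by walking up the parent chain only when needed.

```cpp
// Each body's position is expressed in its parent's frame:
// moon relative to planet, planet relative to sun.
struct Vec2 { double x, y; };

struct Body {
    Vec2 localPos;       // position in the parent's coordinate system
    const Body* parent;  // nullptr for the root frame (the sun)
};

// Resolve a body's position in the root (sun-centered) frame by
// accumulating offsets up the parent chain. In a real simulation
// you'd do this rarely, keeping most math in the small local frames.
Vec2 worldPos(const Body& b) {
    Vec2 p = b.localPos;
    for (const Body* up = b.parent; up != nullptr; up = up->parent) {
        p.x += up->localPos.x;
        p.y += up->localPos.y;
    }
    return p;
}
```

The payoff is that a moon's integrator works with distances on the order of its orbit, not distances from the sun, which keeps floats and doubles well inside their precise range.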

#5193306 Nice-looking tile-based maps

Posted by Ravyne on 17 November 2014 - 03:12 PM

In general, large objects in a tile-based game are very often pieced together from individual tiles. In other words, you do actually put each tiled part of a house into the tilemap -- the top-left, bottom-left, top-middle, and so on. This is simpler because the map data stays regular -- tools can have higher-level ideas about tile groupings representing, say, a house or a tree, to make creating maps more convenient. Sometimes games mix large and small objects, but that's often not terribly flexible, because you're stuck with the whole thing and can't just switch up a few tiles to create variations. For example, in a typical RPG each kind of house might have a tile that represents a "pristine" wall and another variation that represents a cracked wall, or maybe has some other ornament that makes it different. Designers can then sprinkle those variations in to create variety without needing a whole separate large image.

#5193123 Nice-looking tile-based maps

Posted by Ravyne on 16 November 2014 - 01:43 PM

You want to create something like an array or dictionary (or map, if that's what Java calls it) of your tiles, rather than putting each tile into its own variable. Then you store the index or key of the right tile in each cell of your map. You can then get the tile itself by accessing the array/dictionary with the index/key, rather than feeding some kind of index/key into a switch statement (which is what I'd guess you're doing now).
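Here's a C++ sketch of that structure (the `Tile`/`tileAt` names and the image filenames are hypothetical): all tiles live in one indexed container, and the map itself is just a grid of small integer ids into it -- no switch statement anywhere.

```cpp
#include <array>
#include <string>

// One record per tile kind, instead of one variable per tile.
struct Tile {
    std::string imageName;
    bool walkable;
};

// All tiles, indexed by tile id.
const std::array<Tile, 3> tiles = {{
    { "grass.png", true  },   // id 0
    { "water.png", false },   // id 1
    { "wall.png",  false },   // id 2
}};

// The map is just a grid of tile ids.
const int mapData[2][3] = {
    { 0, 0, 1 },
    { 2, 0, 1 },
};

// Lookup is a plain array index -- the id replaces the switch.
const Tile& tileAt(int row, int col) {
    return tiles[mapData[row][col]];
}
```

The same shape works with a dictionary keyed by string or enum if your tile ids aren't dense integers.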

#5192458 Building A Game Engine + Game

Posted by Ravyne on 12 November 2014 - 12:47 PM

Writing a little-e engine isn't hard -- writing a big-E Engine is. If your needs are simple, small, and well defined, writing your own engine is viable; if you have a commercial interest or need major features that are missing from commercial middleware, writing your own Engine is viable. Otherwise, middleware engines are attractive -- sometimes they're expensive, but how much time and effort will you expend before you reach break-even?


If your needs are complex, large, and evolving, then your little-e engine is going to have a very hard time keeping up. Your little-e engine is not Unreal, or Source, or Unity for that matter. Those are generally well-proven, adaptable, scalable, and battle-tested -- things a little-e engine usually isn't.


That said, the OP really does need to learn to crawl -- it's hard to say where to start because we don't know what their experience level is. Oftentimes, jumping into an Engine, even a simple one like Unity, isn't going to be as educational for someone who's plainly inexperienced as starting with a basic gaming library like SDL2 or SFML -- in fact, the sheer breadth of something like Unity can be frustrating and overwhelming for a green programmer just trying to make a simple idea happen, because they have no idea where to start. But for someone who knows generally what they're doing with their programming language and who has fairly grand ambitions, an Engine like Unity, Source, or Unreal is probably the fastest path to success -- and especially if they have commercial ambitions, time-to-market is a strong argument on its own.

#5192323 Resource management

Posted by Ravyne on 11 November 2014 - 05:30 PM

All of that description can be satisfied with a simple handle or proxy object. You have a handle/proxy to the resource. It serves as a long-term reference that can even be persisted.


Yes, exactly. What I'm talking about is an exercise in imagining a non-intrusive, non-centrally-managed system for proxy/handle semantics. You could accurately call the co-pointer I've mentioned "proxy_ptr" if you wanted to (I'm going to for the rest of this reply, just for simplicity). The somewhat new bit compared to the proxy systems I've seen is that I'm using an owning pointer (something like shared_ptr) and an external control block to facilitate notification, rather than central management or intruding on the proxied object or its interface. I think that's useful enough on its own, but I also think there's some room to be more efficient than, say, just using shared_ptr. At this point, I think my weekend project will be to write it up and benchmark it to get a more concrete understanding of its realities.


Maybe I don't understand your issue correctly, but isn't that exactly what one would want?


Absolutely -- sometimes, probably even most times. What you go on to describe are indeed valid concerns, and useful and good semantics if that is your need. In fact, they're exactly the semantics of my own current resource manager, which I've been quite happy with. But I think it's not always the need. There are already a number of people in the "immediate-single-use proxy" camp; I'm not inventing that -- I'm just looking for a different solution than is typical, and one which ends up looking a lot like shared_ptr/weak_ptr.


I concede that the semantics of my proxy_ptr don't prevent the object from being ripped out from underneath you by a poorly-timed de-allocation if you hold onto the reference at all, but neither do non-locking proxy systems of any description, or raw pointers in a multi-threaded context for that matter. And one might reasonably object that if you somehow know that a resource is valid anyway, then a raw pointer is sufficient -- and it is. But sometimes you might not know, or what you knew when you passed the pointer might have changed before you come to use it -- a proxy_ptr as I describe it would be useful for independently propagating a checked, immediate-single-use pointer across an epoch where its resource might or might not have been deleted.


In addendum about shared_ptr inc/dec costs: You shouldn't be doing a lot of those anyway.


Agreed. I've already conceded that smartly using a combination of shared_ptr/weak_ptr/raw pointers goes a long, long way towards achieving efficiency. But it also necessarily gives you semantics you might not want, and exposes you to things you might not want to be exposed to -- e.g. leaking a shared_ptr, or creating cycles between shared_ptrs that prevent the object from being deleted. Now, those problems are indicative of some other bug, and ideally would be addressed as such, but they tend to be difficult bugs to track down, and I've seen more than a few go out into the wild, since leaking a little memory usually isn't a catastrophic issue. Certainly other potential bugs come part and parcel with proxy_ptr, but they'd be of the immediately-crashing type rather than the silently-consume-memory type, and I'd rather have the former, since it comes with a clue about what needs fixing. That said, I don't know quite what the efficiency wins might be in exact quantities; I think they're there, but proxy_ptr probably isn't very attractive without some gains.


Anyhow, like I said, I think I'm going to write this up this weekend and see what I find in terms of implementation, performance, and properties. When I manage to get to it I'll report back what I find. I'll start a new discussion thread and link to it from here. I suspect it'll be illuminating even if it ends up being a wild goose chase.

#5192161 Steam sales vs. own Website sales?

Posted by Ravyne on 10 November 2014 - 04:21 PM

I've never seen specific numbers, but everything I've ever read on the topic says that Steam sales lead by a wide margin -- even some of the indie games that also appear on Xbox Live Marketplace or the PlayStation Store typically report that Steam is their biggest earner. Steam exposure isn't going to drive buyer traffic to your website -- traffic, sure, but if they've found you via Steam, it's very likely they'll buy from you there too. The overwhelming majority of people who will buy directly from you will do so because they're not already Steam customers (and didn't find you through Steam); therefore it stands to reason that only the non-Steam customers your website draws on its own are your base for direct sales -- there's little or no direct halo effect from being on Steam, other than maybe increased word of mouth and general visibility.


I think Amanita's games are an exception -- for one, their games aren't exactly a great fit for Steam, and Steam's users aren't a great fit for them. The fact that Steam *still* managed to account for half of their sales probably says more in favor of Steam than their 50% says about their own website or marketing -- of course, it also says that for a certain kind of game, distributing via other channels or your own channels is necessary for success.

#5192137 Software Fallback?

Posted by Ravyne on 10 November 2014 - 03:03 PM

It could be drivers, but beware that this integrated GPU is a couple of generations old, and it was really only with the HD 4000 series that Intel's IGP performance became acceptable. It's just a lower-performing part than what you might have your local tests running on. Also, since it's old, it might lack features you're using, and that could cause a software fallback to kick in like you suspect. The best idea is probably to investigate and find out whether this is the case, then implement an alternate rendering path that's more optimal for the HD 3000. There's a lot of HD 3000 hardware out there, so for a simple game that ought to run on it, it's worth making sure it runs well, because that's a lot of customers.

#5192102 Steam sales vs. own Website sales?

Posted by Ravyne on 10 November 2014 - 11:51 AM

That'll depend entirely on your own notoriety. Some people prefer to avoid Steam, but they're relatively few. Steam isn't just a way of selling your game; the Steam platform is an audience -- one that has millions of daily regulars. If you can generate similar foot traffic to your own site, you'll probably sell a similar number of copies (at least within a factor of 2-3, probably), but that's easier said than done. The amount of unique user traffic you generate is the upper bound on potential sales through your site, of course -- if you don't draw people in, they can't buy your game there.

#5192099 Good C (not C++) math library?

Posted by Ravyne on 10 November 2014 - 11:44 AM

I believe part of the reason for this is that most of the very general math libraries like BLAS (you can find a link to CBLAS for C here) favor C++ because it allows for optimizations that I don't know how to accomplish in C -- things like using template tricks to elide unnecessary temporaries so that optimal code is produced. Since the folks using these things are all about performance, their efforts largely followed to C++.
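To show what "template tricks to elide temporaries" means, here's a deliberately tiny expression-template sketch (the `Sum`/`Vec3` names are invented for illustration, not from any real library): `b + c + d` builds a lightweight expression object instead of allocating intermediate vectors, and the whole chain is evaluated in a single loop on assignment.

```cpp
#include <array>
#include <cstddef>

// Expression node: holds references to its operands, computes lazily.
template <typename L, typename R>
struct Sum {
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
};

struct Vec3 {
    std::array<double, 3> v{};
    double operator[](std::size_t i) const { return v[i]; }

    // Assignment triggers one pass over the whole expression tree;
    // no temporary Vec3 is ever materialized for b + c.
    template <typename L, typename R>
    Vec3& operator=(const Sum<L, R>& e) {
        for (std::size_t i = 0; i < 3; ++i) v[i] = e[i];
        return *this;
    }
};

// b + c builds a Sum node; (b + c) + d nests another node around it.
inline Sum<Vec3, Vec3> operator+(const Vec3& a, const Vec3& b) {
    return { a, b };
}

template <typename L, typename R>
Sum<Sum<L, R>, Vec3> operator+(const Sum<L, R>& a, const Vec3& b) {
    return { a, b };
}
```

In C you'd need either a temporary per operation or a hand-fused `add3(out, b, c, d)` for every combination; the templates let the compiler do that fusion for arbitrary expressions.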


What kind of functionality do you need out of this library? Small vectors and matrices (DIM <= 4)? Large vectors and matrices? Sparse matrices? Quaternions? Statistical solvers and such?


If you can be specific, someone might know of something to help you.

#5192088 Resource management

Posted by Ravyne on 10 November 2014 - 11:23 AM

I can't think of any need for a "weak_ptr" for a unique_ptr, because as soon as you have something like that, it's no longer unique, by simple virtue of the fact that in order to use the object you have to share ownership for a while so the unique_ptr doesn't delete it. It just makes no logical sense. Not to mention that in order to implement something like that you'd lose all the advantages of unique_ptr in the first place -- namely, the fact that unique_ptr has the exact same cost and speed as a raw pointer. You might as well use shared_ptr if you need weak_ptr-like access, because that's what you really wanted in the first place.

As far as I can tell from that proposal, they just re-implemented raw pointers with a fancy name. So... use a raw pointer. There is no mechanism for clearing the "observer pointer" when the unique_ptr nukes itself, simply because the unique_ptr has no way to tell anyone that it's dead -- that would require additional data that people don't want in unique_ptr in the first place.


I don't think it's hard at all to imagine what the desired and useful semantics of such a pointer would be -- you want a single pointer that owns its resource, and a co-pointer that can check whether it's still valid before returning a for-immediate-use-only, non-owning pointer. The owning pointer would have the move/copy semantics of unique_ptr, but would itself be the same size as a shared_ptr (pointing to both the resource and a control block, or just pointing to the control block and paying a double-indirection cost to get to its resource, as shared_ptr could); it would have a ref-counted control block that would work mostly like shared_ptr's, except it would have only the equivalent of weak_count. The "handle" pointer would behave mostly identically to weak_ptr, except it would return the raw pointer (or nullptr), not a shared_ptr -- and it would express single-use, non-owning semantics. Together, they could allow the resource to be moved if necessary. These are the same semantics that many handle/body systems provide, like the one Jason described, albeit many of them accomplish it via a table-based manager class similar to what you describe.
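A minimal sketch of those semantics might look like the following (the `owner_ptr`/`proxy_ptr`/`ControlBlock` names are my own shorthand for the co-pointers described above, and this single-threaded sketch glosses over the block-deletion races a real implementation would have to order carefully): the owner alone controls lifetime and nulls the control block's resource pointer on destruction; proxies only bump a count that keeps the block alive, and hand back an immediate-use raw pointer or nullptr.

```cpp
#include <atomic>

// Shared between one owner and any number of proxies.
template <typename T>
struct ControlBlock {
    std::atomic<T*> resource{ nullptr };  // nulled when the owner dies
    std::atomic<int> proxy_count{ 0 };    // keeps the block itself alive
};

template <typename T>
class owner_ptr {
public:
    explicit owner_ptr(T* p) : block_(new ControlBlock<T>) {
        block_->resource = p;
    }
    ~owner_ptr() {
        // Only the owner ever deletes the resource; proxies just
        // observe the null and start returning nullptr.
        delete block_->resource.exchange(nullptr);
        if (block_->proxy_count == 0) delete block_;
    }
    owner_ptr(const owner_ptr&) = delete;             // unique ownership
    owner_ptr& operator=(const owner_ptr&) = delete;
    T* get() const { return block_->resource.load(); }
    ControlBlock<T>* block() const { return block_; }
private:
    ControlBlock<T>* block_;
};

template <typename T>
class proxy_ptr {
public:
    explicit proxy_ptr(const owner_ptr<T>& o) : block_(o.block()) {
        ++block_->proxy_count;
    }
    proxy_ptr(const proxy_ptr& rhs) : block_(rhs.block_) {
        ++block_->proxy_count;
    }
    ~proxy_ptr() {
        // Last proxy frees the block, but only once the owner is gone.
        if (--block_->proxy_count == 0 && block_->resource.load() == nullptr)
            delete block_;
    }
    // Immediate-use, non-owning pointer; nullptr if the resource died.
    T* get() const { return block_->resource.load(); }
private:
    ControlBlock<T>* block_;
};
```

Note that copying a proxy costs one atomic increment and no use_count synchronization, which is the efficiency argument made below.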


Now, having thought about it over the weekend, I agree that it doesn't appear to have much advantage over just using shared_ptr at first blush -- after all, you could wrap a weak_ptr in a simple facade over a shared_ptr that you never share and get the same semantics (* that's a half-truth I'll get to in a second) as I describe for my imagined co-pointers, if you're happy to pay the overhead of doing so. On the surface it doesn't appear that a custom implementation saves us much; in fact, it only saves one, maybe two, counters as near as I can tell. But that's not the whole story...


Now for the half-truth. The main expense of using shared_ptr isn't the size of the control block, or even the fact that it's twice as large as a raw pointer -- it's that every time you create or destroy a shared_ptr to the same resource, you have to jump through hoops to be thread-safe while you change the use_count and potentially destroy the object. The semantics described by my imagined co-pointers don't require this kind of heavy synchronization, because only the owning pointer determines the lifetime of the object; only the increment/decrement of the weak_count needs to be synchronized, and many processors have ISA-level support for that. The only time heavy synchronization might be required is when the owning pointer deletes the resource and informs the control block, but I think even that isn't necessary. Now, simply not over-using shared_ptr by passing it all the way down your call stacks can save a lot of overhead itself, but the co-pointers I describe don't really have this flaw to begin with -- pass them around all you want; there's not much cost over a raw pointer beyond an extra pointer on the stack and an atomic increment.


But I also don't disagree with your assertion that many times you do want to take a temporary ownership interest in something, and for that shared_ptr/weak_ptr express the right semantics. There are really two camps regarding resource managers -- the kind where ownership flows into the hands of a user, which can be implemented with shared_ptr/weak_ptr, and the handle/body style that usually uses a table-based manager. The co-pointers I describe aren't really proposing new semantics, just standard semantics for handle/body that don't require centralized management. Neither camp is wrong; it's just a different philosophy that offers different trade-offs.



Regarding the proposal for observer_ptr -- yes, it's little more than a semantic layer over a raw pointer, but that alone is incredibly useful. With other built-in types, like int or char, you know all there is to know about them that the language could reasonably tell you -- you might not know what purpose they serve algorithmically, but you have a reasonable idea of where the fences are, at least. But it's never been so with pointers. If you have a pointer to an int, you maybe know about this int what you always know about int, but you can't be certain without more context, and you certainly don't have any earthly idea what semantics the pointer itself was meant to convey -- does it own or not own what it points to? Is what it points to on the stack or the heap? Does it point into an array? How was that array allocated? Is it really pointing to a memory-mapped register? Good naming helps convey intent, but it doesn't erect any fences. Interestingly, if you read that paper, you'll see it's taken a lot longer for everyone to agree on the move/copy semantics observer_ptr should have than one would think if it really were as simple as doing what a raw pointer does -- it's contentious enough that it's still a proposal and didn't make it into either C++11 or C++14.

#5191748 Resource management

Posted by Ravyne on 07 November 2014 - 08:43 PM

What's described in that article shares some of what I envision for the re-write of my ResourceCache, so thanks for the link.


In most ways I can think of, weak_ptr serves the function of a proxy, except that:

  • It allows you only to rehydrate a shared_ptr, which means you're taking an owning interest in the resource, if only temporarily. There's a risk, then, of mistakenly forgetting to release that shared_ptr, causing the resource to leak.
  • Because it relies on the shared_ptr control block, it can't be used with unique_ptr, which would better express the semantics of your solution if you wanted the cache/pool to be the sole owner, and it is also less efficient. Because weak_ptr necessarily leaks its underlying shared_ptr dependency, there's no way to work around this (I suppose you could wrap weak_ptr in another class and return that, but that's just boilerplate and doesn't resolve the inefficiency).


So it seemed to me that there's a legitimate need for something like weak_ptr, but for unique_ptr rather than shared_ptr (that is: weak_ptr is to shared_ptr as ??? is to unique_ptr). Others agree; a little google-fu led me to this question on StackOverflow, and a subsequent answer mentioning this observer_ptr proposal for C++, which acknowledges the need for something similar, though it only goes as far as being a standard semantic layer over a raw pointer, not a functional one. It provides useful semantics for pointers intended as observers, but doesn't provide a means to check whether what it observes is still valid.


So it seems to me now that there's actually a missing pair of smart pointers that together implement the semantics of a uniquely-owned object and of observers that ensure the object is still alive before returning a one-time-use pointer to it. This would differ from shared_ptr/weak_ptr in that its shared_ptr equivalent would be strictly lighter weight, and its weak_ptr equivalent would offer no way to rehydrate an ownership interest in the object from it.


Something like the proposed std::optional of a unique_ptr, combined with something weak_ptr-like but aware of the object's "optional-ness", seems to give the right semantics (though I haven't scrutinized it too hard), but I wonder how std::optional is intended to be implemented/specialized, and whether a special implementation of this observed-and-observable smart pointer pair could be made more efficient by clever implementation (e.g. maybe as a tagged pointer).

#5191727 Is it viable to distribute your game via USB flash stick?

Posted by Ravyne on 07 November 2014 - 03:09 PM

You're looking for something less expensive than what, exactly? Digital distribution via app stores and Steam (an approximately 30% 'distribution' fee)?


While I do think that 30% is too high (quick plug for the Windows Store: they drop to 20% after you make 50K USD), it's important to realize that you're getting a lot more than just distribution from those channels. Firstly, you get the benefit of their platform having a built-in audience who already have their payment details in the system; do not underestimate the benefit of impulse purchasing -- you'll lose a lot of sales merely for requiring a would-be customer to fill in their details in yet another payment system. You also get the benefit of their reputation -- people know and trust that Valve and Apple have policies to ensure there's no malware hitching a ride on their purchases, and that other policies generally require the software to be of reasonable quality.


You, on your own, don't have those benefits. You can use established payment processors like PayPal or Amazon Payments, but some people still won't be in whatever systems you use. It's a non-trivial amount of work to support each payment system you decide to use, too, and to manage the flow of money between them and deal with all the different receipt formats. But even if you solve those problems, the bigger issue is trust -- there's really nothing you can do to get people to trust you other than building a strong reputation for integrity over time -- lots of time. Too much time to help you be successful in the near term.


Between a USB stick and a CD/DVD, there's not a lot of difference. Discs will be cheaper, but you need to be able to order in quantities of 1000 before you get a proper, pressed disc (the way mass-market discs are produced) with printed art -- any fewer and your discs will come from a wall of DVD burners and have a glossy sticker pressed on. When I looked at these costs years ago, it was about 2-3 USD per unit at a quantity of 2000 -- that was a CMYK-printed, pressed DVD with an 8-page instruction manual (CMYK cover, B/W inside), inside a DVD case with a CMYK sleeve insert, all shrink-wrapped. It meant laying out 4-6 thousand bucks up front, with the possibility of not selling them all, and needing to sell 400-600 units at $10 just to break even. You can get 4GB USB sticks pre-loaded with your software in similar quantities at prices near the higher end of disc production, and you can even choose from some standard body designs and get your logo or artwork on them. For more, you can get custom bodies injection-molded for your USB stick, and/or have them packaged in a nice tin or box.


[EDIT] Looking at the link Servant provided, it seems the cost of disc production has gone down a bit -- I can configure a similar production run to what I described for about $3000. USB sticks are far more expensive, at over $10000 for 2000 units in reasonable packaging, but I've seen cheaper.


At this point, I would not really consider anything other than digital distribution for the bulk of sales. I'd consider a physical release mostly as a sort of special perk for those who might want such a thing, but only if I were able to charge a price that made financial sense for what would probably be an order of fewer than 1000 units. You really have to have quite a popular game to support a physical release.

#5191454 Resource management

Posted by Ravyne on 05 November 2014 - 11:26 PM

Perhaps a ModelManager and a TextureManager, but not a ResourceManager.


Right, but let it be noted that if your "manager" only has one responsibility, that is a clear case for making it a template class. Even if you need specialized behavior based on the resource type (e.g. a custom allocator), you can inject that when you instantiate the manager too. That's part of my own solution.
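A minimal sketch of that single-responsibility template manager (the `ResourceCache` name and `Loader` injection point are hypothetical, not a reference to any particular codebase): one class template covers models, textures, sounds, and so on, with per-type behavior injected as a template parameter.

```cpp
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>

// One template covers every resource type; type-specific behavior
// (loading, allocation) is injected through the Loader parameter.
template <typename Resource, typename Loader>
class ResourceCache {
public:
    explicit ResourceCache(Loader loader) : loader_(std::move(loader)) {}

    // Returns the cached resource, loading it on first request.
    std::shared_ptr<Resource> get(const std::string& key) {
        auto it = cache_.find(key);
        if (it != cache_.end()) return it->second;  // cache hit
        auto res = std::make_shared<Resource>(loader_(key));
        cache_.emplace(key, res);
        return res;
    }

private:
    Loader loader_;
    std::unordered_map<std::string, std::shared_ptr<Resource>> cache_;
};
```

Instantiating `ResourceCache<Texture, TextureLoader>` and `ResourceCache<Model, ModelLoader>` then gives you the "TextureManager" and "ModelManager" from the quote, without writing two managers.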


Once you have loaded the resource, just how often are you planning to pass the shared pointer by copy? You know that if you pass a shared pointer by reference it doesn't change the reference count (no overhead), right?


That's true, but it's not quite the whole story. If you were to take a reference to a shared_ptr in this way, you have a way to get the resource, but (like passing a raw pointer to the contained resource, as I suggested above) holding the pointer or reference doesn't take an ownership interest in the resource -- either method requires that you *know for certain* that the resource can't be lost for as long as you hold that pointer or reference.


You have a couple of options for passing resources without copying a shared_ptr:

  • As described, use a raw pointer or reference (to the resource or to the shared_ptr) if the resource is guaranteed to outlive the pointer/reference. There's no way to check whether the resource is still valid; you can only assume.
  • Use std::weak_ptr -- this is no lighter than shared_ptr, but it expresses different semantics. Instead of making an implicit promise that the resource will outlive it, using weak_ptr says "I have an interest in using the resource later, but only if it still exists, and I don't require that it does. I'll check first." weak_ptr allows you to check, and to take a shared_ptr from it later if you want.
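The second option can be sketched in a few lines (the `try_use` function is a hypothetical example of the pattern): the observer holds no ownership; `lock()` yields a temporary shared_ptr only while the resource still exists, and an empty pointer after the cache releases it.

```cpp
#include <memory>

// Check-then-use: lock() returns a shared_ptr that temporarily
// co-owns the resource for the duration of the call, or an empty
// pointer if the resource has already been destroyed.
bool try_use(const std::weak_ptr<int>& observer) {
    if (std::shared_ptr<int> res = observer.lock()) {
        return *res > 0;   // safe: res keeps the resource alive here
    }
    return false;          // resource was already released
}
```

Note that the temporary shared_ptr from `lock()` is released at the end of the scope, so the observer never extends the resource's lifetime beyond the moment of use.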