

Ravyne

Member Since 26 Feb 2007

#5194703 Optimizing 2D Tile Map

Posted by Ravyne on 25 November 2014 - 07:06 PM

If by objects, you mean things that have position on the map but are not "tiles" in the strictest sense, then no, you don't need to separate them from the tilemap, although you can if you want. What I meant was that it sounded like your map files also contained tile/object images inside them, and that's what would be bad.

 

It looks like you're using about 17.4 bytes per tile space, which could probably be reduced but is not unreasonable either. My guess is that simply switching to binary will save you a good deal of time; you'll probably see load times cut in half or better from that one change alone. If you can reduce the size of the map file, either by reducing the size of the atomic data elements, reducing the map area contained in a file, or by using compression, you will see still further gains that will be roughly linear (e.g. reducing the file size by half will reduce load time by half, since IO is such a bottleneck).

 

Another thing you should consider, which I neglected to mention before, is making sure you pre-allocate all the memory your data structures will need. For example, don't just create an empty vector and push items onto it, letting it grow as needed. Even though this works perfectly well, it's far more efficient to tell the vector to pre-allocate space for the 10k tiles when you create it -- or before you start loading the tiles in -- and you can still just push items onto it afterwards. Pre-allocate for everything you can: if you know how many of something you need before you need space for it, you can pre-allocate it -- and if your file format doesn't tell you how many tiles/objects to expect, add that information to the file so you can use it. If you can't, pre-allocate to a reasonable guess; you might waste some memory, but it's not a problem unless you're already tight on memory. That'll be another worthwhile boost if you're not already pre-allocating.
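For example, a minimal sketch of the reserve-then-push pattern (the Tile layout and names are placeholders, not your actual format):

#include <cstddef>
#include <cstdint>
#include <vector>

struct Tile {
    uint16_t tileIndex;   // index into the shared tileset
    uint8_t  flags;       // collision, animation, etc.
};

// Assumes the map header tells us how many tiles follow.
std::vector<Tile> loadTiles(std::size_t tileCount)
{
    std::vector<Tile> tiles;
    tiles.reserve(tileCount);      // one allocation up front instead of
                                   // repeated grow-and-copy while pushing
    for (std::size_t i = 0; i < tileCount; ++i) {
        Tile t{};                  // ...read t from the file here...
        tiles.push_back(t);        // still just push afterwards
    }
    return tiles;
}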

 

Finally, make sure you're walking memory linearly -- sometimes people walk memory down their columns rather than across their rows. When you start talking about contiguous memory regions larger than 64k, you're guaranteed to be blowing out your L1 data cache if you're walking memory the wrong way; you'll have a cache miss practically every time (and that's on the newest Haswell processors, too). You'll probably see adverse effects even at 32k due to other tasks on the CPU. At the sizes you're talking about, between 128k and 256k, that's large enough to even impact the L2 cache. If you're doing that wrong, fixing it will be another worthwhile win -- the effect on load time will be nice, but properly walking memory during runtime will pay big dividends too.
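To illustrate, assuming a row-major map stored in one contiguous array (names are made up):

#include <cstdint>
#include <vector>

// width * height tile indices stored row-major: index = y * width + x
void touchAllTiles(std::vector<uint16_t>& tiles, int width, int height)
{
    // Good: the inner loop walks x, so consecutive iterations touch adjacent memory.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            tiles[y * width + x] += 1;

    // Bad: the inner loop walks y, so each access strides 'width' elements ahead,
    // touching a new cache line almost every iteration once the map outgrows the cache.
    for (int x = 0; x < width; ++x)
        for (int y = 0; y < height; ++y)
            tiles[y * width + x] += 1;
}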




#5194537 Optimizing 2D Tile Map

Posted by Ravyne on 24 November 2014 - 10:08 PM

It's all going to come down to how big those map files are, how large their dependencies (if any) are, and what their structure is. In general, I wouldn't have thought they'd be large enough to form a bottleneck, so I would guess that you are either storing a lot of things in the file, storing it inefficiently, or perhaps evicting the dependencies of the previously-loaded map and then reloading them anew -- which, if true, is probably the real source of the inefficiency. I think your last paragraph indicates that this is what you're doing.

 

I would absolutely keep tile images separate from their maps. There may be tiles that are unique to a map, but it's still probably better to store them separately. You certainly want to share common tiles between maps, and once you're doing that, making a special case to support tiles also coming from within the map itself is just extra work for no real gain. But you can do that if it floats your boat.

 

If you only have map data inside the file and it's still a bottleneck, then you probably want to look first at whether you are storing your maps efficiently -- preferably in something that can be loaded directly into memory with minimal translation. Text formats like XML or JSON are right out as a runtime format (though they might be appropriate during production), and even binary formats that require fine-grained parsing can be troublesome.
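As a sketch of the kind of load-directly-into-memory format I mean (the header fields and fixed-size Tile record are assumptions, not your actual format):

#include <cstdint>
#include <cstdio>
#include <vector>

#pragma pack(push, 1)
struct MapHeader {
    uint32_t magic;      // sanity-check value, e.g. 'MAP1'
    uint32_t width;
    uint32_t height;     // width * height tiles follow, row-major
};
struct Tile {
    uint16_t tileIndex;
    uint8_t  flags;
};
#pragma pack(pop)

bool loadMap(const char* path, MapHeader& header, std::vector<Tile>& tiles)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    bool ok = std::fread(&header, sizeof(header), 1, f) == 1;
    if (ok) {
        tiles.resize(static_cast<std::size_t>(header.width) * header.height);
        // One bulk read straight into the vector -- no per-field text parsing.
        ok = std::fread(tiles.data(), sizeof(Tile), tiles.size(), f) == tiles.size();
    }
    std::fclose(f);
    return ok;
}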

 

If you're sure that the size and the shape of your data isn't the problem, then you'll want to look at breaking the map into even smaller sections (by the way, one of the advantages of moving tile data out of the map file itself is that you can break up the map into smaller sections without worrying about which file those unique tiles land in) -- maybe 64x64, 50x50, or 32x32. Even if you end up loading the same amount of the map, you can load the nearest parts first and continue on your way.

 

Finally, since it's often faster to load and decompress compressed data than it is to read the uncompressed data from disk, compression is another avenue you could explore, either on its own with larger maps or combined with smaller maps for even further reduced load times. Very simple run-length encoding will do pretty well; Huffman encoding will do even better -- it's more complex, but there are very good free libraries for it. You could also use a .zip archive library for all of your game assets and let it handle compression for you -- again, there are good free libraries for this.
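For a concrete feel of the simplest option, here is a minimal run-length encoding sketch over 16-bit tile indices (a real format would also want a small header recording the decoded length):

#include <cstdint>
#include <vector>

// Encode as (count, value) pairs; runs longer than 65535 are split.
std::vector<uint16_t> rleEncode(const std::vector<uint16_t>& tiles)
{
    std::vector<uint16_t> out;
    for (std::size_t i = 0; i < tiles.size(); ) {
        uint16_t value = tiles[i];
        uint16_t count = 0;
        while (i < tiles.size() && tiles[i] == value && count < 0xFFFF) {
            ++count;
            ++i;
        }
        out.push_back(count);
        out.push_back(value);
    }
    return out;
}

std::vector<uint16_t> rleDecode(const std::vector<uint16_t>& runs)
{
    std::vector<uint16_t> out;
    for (std::size_t i = 0; i + 1 < runs.size(); i += 2)
        out.insert(out.end(), runs[i], runs[i + 1]);   // repeat value 'count' times
    return out;
}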

 

 

If you can describe the contents/format/size of your map files in more detail, we can help you out more, or I can give you a better idea of what parts of the above are most relevant to your situation.




#5194503 Best language for mobile apps

Posted by Ravyne on 24 November 2014 - 05:39 PM

C++ is well-supported on all* mobile platforms, and is portable for the most part. However, it's a relatively difficult language compared to C# or Java -- that is, code of reasonable quality is easier to produce in C# or Java than in C++. The potential benefit of C++ is that you can tune performance to the utmost if you have the skills and knowledge to do so; the potential cost is that C++ does none of the hand-holding that C# or Java do, and therefore requires greater knowledge and attention to detail to write bulletproof code.

 

* AFAIK, the only relevant mobile platform not open to C++ for non-professional development is the PS Vita -- while professionals use C++ there, non-professionals are limited to C#.

 

If you're interested in games specifically, Unity is a great choice. Visual Studio 2013 Community is a free version of Visual Studio (unrestricted for academic and open-source use, restricted for commercial teams to fewer than 5 devs) that supports plugins, including the excellent Visual Studio Tools for Unity, also free, which integrates Visual Studio as the code editor for Unity. Unity's primary programming language is C# (about two-thirds), followed by UnityScript (JavaScript-like, about one-third), with Boo taking up the rear with so little use that it's deprecated now.

 

Otherwise, for apps outside of games, Visual Studio again shows strongly: Xamarin with C# produces apps that run on all the major mobile and desktop platforms with little platform-specific code. You can use Xamarin from other IDEs, but VS has great integration.

 

Between Java and C#, C# is the nicer language, but if you're already familiar with Java much beyond the basics, a switch is probably not the best way to spend your time and effort now.




#5193867 creating basic lobby services

Posted by Ravyne on 20 November 2014 - 03:40 PM

The main things you need are authentication, game-selection and/or matchmaking, and facilitating NAT punch-through. This could all be done as a RESTful service. I would suggest that the easiest way to design such a thing would be to have the lobby service deal with only one game, but you can run multiple instances of the service bound to different network endpoints to service multiple games.
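If it helps to picture it, here is a very rough sketch of such a RESTful surface (the endpoint names and payloads are invented, and cpp-httplib stands in for whatever HTTP stack you would actually use):

// Rough sketch only: routes and JSON bodies are made up for illustration.
#include "httplib.h"   // https://github.com/yhirose/cpp-httplib

int main()
{
    httplib::Server lobby;   // one instance per game; bind other instances to other ports

    // Authentication: exchange credentials for a session token.
    lobby.Post("/login", [](const httplib::Request&, httplib::Response& res) {
        res.set_content(R"({"token":"..."})", "application/json");
    });

    // Game selection / matchmaking: list open games.
    lobby.Get("/games", [](const httplib::Request&, httplib::Response& res) {
        res.set_content(R"([{"id":1,"players":3}])", "application/json");
    });

    // NAT punch-through: report your public endpoint, receive your peer's.
    // (A real service would take the game id as a parameter.)
    lobby.Post("/games/1/join", [](const httplib::Request&, httplib::Response& res) {
        res.set_content(R"({"peer":"203.0.113.5:40000"})", "application/json");
    });

    lobby.listen("0.0.0.0", 8080);
}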

 

Depending on how forward-looking you want to be, the next version of Windows Server will support containerization (similar to Linux containerization with LXC, as popularized by Docker), which would probably be the best long-term solution of all. Azure will support this as soon as it's ready. A container is essentially a way of running an application or service in a way that's completely isolated from other containers, so you can treat the system as if you're the only one running on it -- each container can have its own dependencies that might otherwise conflict with those required by other containers, for instance. It has the advantages of a VM but is lighter-weight, because you're only using the resources required for the application and its dependencies, not a distinct OS stack for each application. You can do this on Azure today if you write the service for Linux, since you can run Linux on Azure. The benefit of containers is that you can test locally the same exact container that you will run in the cloud, or toss the container over to whoever provides you the best value for hosting containers. Google and lots of other huge web-services companies have adopted (or are adopting) containers like mad because of their benefits for DevOps; they really are the wave of the future.




#5193862 C# .Net Open Source

Posted by Ravyne on 20 November 2014 - 03:08 PM

It's not going to take many existing Java jobs away from places where Java is already in place; the only reason such a migration might make sense is if a company was having trouble finding Java people (or had an abundance of C# people) in their local area. For new jobs where neither Java nor C# is already in place, C# will be more attractive now -- to be perfectly blunt, C# is a better language than Java, full stop. The only real advantage Java has had is that it had been more open and had gotten a head start, especially on non-Microsoft platforms. Through Mono, C# has already been an option in many places, but people are wary of Mono for fear of it not being "official" or for fear of Microsoft one day coming after them. Those concerns are now moot.

 

The core of .NET is open, but not everything, so you won't see total compatibility for every .NET desktop application overnight. What you will see, eventually, is the open-source core being pulled into and drawn from by projects like Mono or Unity. As a result, those projects will have an easier time maintaining parity with language features, and will have more time to work on the things that aren't part of the open-source core. The runtime, and effectively the languages, are all part of that core though -- I think it's just parts of the platform libraries that aren't open yet.

 


Poor cache awareness in the application code however might hurt the performance more, but then again if you don't do this in C++ you will have a similar slowdown.

 

It's true, but the design of managed languages and the CLR gives you less control over the precise behavior of your memory use. Cache-aware C# runs better than non-cache-aware C#, but will likely never run as well as cache-aware C or C++, and it still lacks truly deterministic resource reclamation, which is also a hindrance to performance-tuned C#.

 


Unity3D has already started using C# as its scripting language and in conjunction with MS has developed a Visual Studio plugin that will interface with it for writing your scripts while in Unity.

 

Actually, Microsoft bought a company called SyntaxTree, who already made and sold a plugin called UnityVS. Those folks are now working as part of Microsoft, together with the Visual Studio team, to offer a better product. On top of that, the product, now called Visual Studio Tools for Unity, has been made free, and there's now VS2013 Community, which supports such plugins in a free version of Visual Studio. VS Community and VSTU are part of a general trend of making tools more accessible.




#5193739 Need a short name to replace a really really long function's name

Posted by Ravyne on 20 November 2014 - 01:55 AM

Saving keystrokes is never a good reason to abbreviate or give something a name that's less accurate/descriptive than one that's longer. By all means, use the shortest name that's accurate, but never sacrifice accuracy for brevity.

That being said, if you follow that guideline but find yourself having trouble giving things reasonably short names, it can be an indication that your function is doing too much -- in particular, if you find yourself reaching for a conjunction like "and"/"or", it's almost always a sign that you should split your function. Doing so will make your code more flexible and less coupled.
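A contrived sketch of what that kind of split looks like (the names and types are made up):

#include <string>

struct Config     { std::string serverAddress; };
struct Connection { bool open; };

// A name with a conjunction in it usually hides two responsibilities:
//     void loadConfigAndConnectToServer(const std::string& path);

// Split apart, each piece gets a short, accurate name and they compose freely:
Config loadConfig(const std::string& path)       { return Config{ path }; }   // placeholder body
Connection connectToServer(const Config& config) { return Connection{ true }; } // placeholder body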


#5193700 Runge-Kutta in a large solar system

Posted by Ravyne on 19 November 2014 - 06:31 PM

When you're dealing with that much sparsely-populated space, and numbers of that magnitude, it's quite common not to deal with everything in a single coordinate space -- the numbers just start to break down unless you're willing to eat the cost of a math library with sufficiently large precision; you won't get it out of floats or doubles. I'd probably have each planet be its own coordinate system, which itself orbits the sun's coordinate system, and likewise for moons as they relate to their planets. Then, based on the masses of the bodies involved, you'll be able to determine a radius within which that body is the one exerting the most force, and you can either switch over to just using that one, or also calculate for other bodies and interpolate the forces -- it depends on how accurate you want to be.
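For the switch-over radius, one common choice -- an assumption here, not something prescribed above -- is the patched-conic sphere of influence:

#include <cmath>

// Radius around a body of mass m, orbiting a much larger mass M at
// semi-major axis a, inside which m's gravity dominates (patched-conic
// sphere-of-influence approximation): r ~= a * (m / M)^(2/5).
double sphereOfInfluence(double semiMajorAxis, double bodyMass, double parentMass)
{
    return semiMajorAxis * std::pow(bodyMass / parentMass, 2.0 / 5.0);
}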




#5193306 Nice-looking tile-based maps

Posted by Ravyne on 17 November 2014 - 03:12 PM

In general, large objects in a tile-based game are very often pieced together from individual tiles. In other words, you do actually put each tiled part of a house into the tilemap -- the top-left, bottom-left, top-middle, and so on. This is simpler because the map data stays regular, and tools can have higher-level ideas about tile groupings representing, say, a house or a tree, to make creating maps more convenient. Sometimes games mix large and small objects, but it's often not terribly flexible because you're stuck with the whole thing and can't just switch up a few tiles to create variations -- for example, in a typical RPG each kind of house might have a tile that represents a "pristine" wall and another variation that represents a "cracked" wall, or maybe has some other ornament that makes it different. Designers can then sprinkle those variations in to create variety without needing a whole large, distinct image.
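A sketch of how a tool-side tile grouping might look (structure and names are purely illustrative; bounds checking omitted for brevity):

#include <cstdint>
#include <vector>

// A "stamp" is a rectangle of tile indices the editor can place as one unit,
// e.g. a 3x3 house; the map itself still stores only plain tile indices.
struct Stamp {
    int width, height;
    std::vector<uint16_t> tiles;   // width * height indices, row-major
};

struct TileMap {
    int width, height;
    std::vector<uint16_t> tiles;   // row-major

    void place(const Stamp& s, int destX, int destY)
    {
        for (int y = 0; y < s.height; ++y)
            for (int x = 0; x < s.width; ++x)
                tiles[(destY + y) * width + (destX + x)] = s.tiles[y * s.width + x];
    }
};

// After placement, a designer can still overwrite single cells -- swapping a
// "pristine wall" index for a "cracked wall" one -- to create variations.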




#5193123 Nice-looking tile-based maps

Posted by Ravyne on 16 November 2014 - 01:43 PM

You want to create something like an array or dictionary (or map, if that's what Java calls it) of your tiles rather than putting each tile into its own variable. Then, you store the index or key of the right tile in each cell of your map. You can then get the tile itself by accessing the array/dictionary with the index/key, rather than using some kind of index/key with a switch statement (which is what I'd guess you're doing now).
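A minimal sketch of that lookup, shown in C++ here for illustration (in Java the same shape is a HashMap or an array; the names are made up):

#include <string>
#include <unordered_map>

struct Tile { std::string imagePath; bool solid; };

int main()
{
    // One container of tiles instead of one variable per tile...
    std::unordered_map<int, Tile> tileset = {
        {0, {"grass.png", false}},
        {1, {"water.png", true}},
        {2, {"wall.png",  true}},
    };

    // ...and each map cell stores only a key into it.
    int mapCell = 2;
    const Tile& tile = tileset.at(mapCell);   // direct lookup, no switch statement
    (void)tile;
}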


#5192458 Building A Game Engine + Game

Posted by Ravyne on 12 November 2014 - 12:47 PM

Writing a little-e engine isn't hard -- writing a big-E Engine is. If your needs are simple, small, and well-defined, writing your own engine is viable; if you have a commercial interest or need major features that are missing from commercial middleware, writing your own Engine is viable. Otherwise, middleware engines are attractive -- sometimes they're expensive, but how much time and effort will you expend before you reach break-even?

 

If your needs are complex, large, or evolving, then your little-e engine is going to have a very hard time keeping up. Your little-e engine is not Unreal, or Source, or Unity for that matter. Those are generally well-proven, adaptable, scalable, and battle-tested -- things a little-e engine usually isn't.

 

That said, the OP really does need to learn to crawl first -- it's hard to say where to start because we don't know what their experience level is. Oftentimes, jumping into an Engine, even a simple one like Unity, isn't going to be as educational for someone who's plain inexperienced as starting with a basic gaming library like SDL2 or SFML -- in fact, the sheer breadth of something like Unity can be frustrating and overwhelming for a green programmer just trying to make a simple idea happen, because they have no idea where to start. But for someone who generally knows what they're doing with their programming language and who has fairly grand ambitions, an Engine like Unity, Source, or Unreal is probably the fastest path to success -- and especially if they have commercial ambitions, time-to-market is a strong argument on its own.




#5192323 Resource management

Posted by Ravyne on 11 November 2014 - 05:30 PM


All of that description can be satisfied with a simple handle or proxy object. You have a handle/proxy to the resource. It serves as a long-term reference that can even be persisted.

 

Yes, exactly. What I'm talking about is an exercise in imagining a non-intrusive, non-centrally-managed system for proxy/handle semantics. You could accurately call the co-pointer I've mentioned "proxy_ptr" if you wanted to (I'm going to for the rest of this reply, just for simplicity). The somewhat new bit compared to the proxy systems I've seen is that I'm using an owning pointer (something like shared_ptr) and an external control block to facilitate notification, rather than relying on central management or intruding on the proxied object or its interface. I think that's useful enough on its own, but I also think there's some room to be more efficient than, say, just using shared_ptr. At this point, I think my weekend project will be to write it up and benchmark it to get a more concrete understanding of its realities.
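To give a rough feel for the intended semantics -- not the proposed implementation, just an approximation using the standard shared_ptr/weak_ptr control block:

#include <iostream>
#include <memory>

struct Texture { int id = 42; };

int main()
{
    // Owning pointer plus external control block, as with shared_ptr...
    auto owner = std::make_shared<Texture>();

    // ...and a non-owning co-pointer that must be checked before each use.
    std::weak_ptr<Texture> proxy = owner;

    if (auto locked = proxy.lock())              // immediate, single-use access
        std::cout << "texture " << locked->id << " is alive\n";

    owner.reset();                               // the resource manager frees it

    if (proxy.expired())                         // the proxy notices; no dangling use
        std::cout << "texture is gone\n";
}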

 


Maybe I don't understand your issue correctly, but isn't that exactly what one would want?

 

Absolutely -- sometimes, probably even most times. What you go on to describe are indeed valid concerns and useful, good semantics if that is your need. In fact, it's exactly the semantics of my own current resource manager, which I've been quite happy with. But I think it's not always the need. There are already a number of people in the "immediate-single-use proxy" camp; I'm not inventing that -- I'm just looking for a different solution than is typical, and one which ends up looking a lot like shared_ptr/weak_ptr.

 

I concede that the semantics of my proxy_ptr don't prevent the object from being ripped out from underneath you by a poorly-timed object de-allocation if you hold onto the reference at all, but neither do non-locking proxy systems of any description, or raw pointers in a multi-threaded context for that matter. And one might reasonably object that if you somehow know that a resource is valid anyway, then a raw-pointer is sufficient -- and it is. But sometimes you might not know, or what you did know when you passed the pointer might have changed before you come to use it -- a proxy_ptr as I describe would be useful for independently propagating a checked, immediate-single-use pointer across an epoch where its resource might or might not have been deleted.

 


In addendum about shared_ptr inc/dec costs: You shouldn't be doing a lot of those anyway.

 

Agreed. I've already conceded that using a combination of shared_ptr/weak_ptr/raw pointers smartly goes a long, long way towards achieving efficiency. But it also necessarily gives you semantics you might not want, and exposes you to things you might not want to be exposed to -- e.g. leaking a shared_ptr, or creating cycles between shared_ptrs that prevent the object from being deleted. Now, those problems are indicative of some other bug, and ideally would be addressed as such, but they tend to be difficult bugs to track down, and I've seen more than a few go out into the wild since leaking a little memory usually isn't a catastrophic issue. Certainly other potential bugs come part and parcel with proxy_ptr, but they'd be of the immediately-crashing type rather than the silently-consume-memory type, and I'd rather have the former since it comes with a clue about what needs fixing. That said, I don't know quite what the efficiency wins might be in exact quantities; I think they're there, but proxy_ptr probably isn't very attractive without some gains.

 

Anyhow, like I said, I think I'm going to write this up this weekend and see what I find in terms of implementation, performance, and properties. When I manage to get to it I'll report back what I find. I'll start a new discussion thread and link to it from here. I suspect it'll be illuminating even if it ends up being a wild goose chase.




#5192161 Steam sales vs. own Website sales?

Posted by Ravyne on 10 November 2014 - 04:21 PM

I've never seen specific numbers, but everything I've ever read on the topic says that Steam sales lead by a wide margin -- even some of the indie games that also appear on Xbox Live Marketplace or the PlayStation Store typically report that Steam is their biggest earner. Steam exposure isn't going to drive buyer traffic to your website -- traffic, sure, but if they've found you via Steam, it's very likely they'll buy from you there too. The overwhelming majority of people who will buy directly from you will do so because they're not already Steam customers (and didn't find you through Steam); therefore it stands to reason that only the non-Steam customers your website draws on its own are your base for direct sales -- there's little or no direct halo effect from being on Steam, other than maybe increased word-of-mouth and general visibility.

 

I think Amanita's games are an exception -- for one, their games aren't exactly a great fit for Steam and Steam's users aren't a great fit for them. The fact that Steam *still* managed to account for half of their sales probably says more in favor of Steam than their 50% says about their own website or marketing -- of course, it also says that for a certain kind of game, distributing via other channels or your own channels is necessary for success.




#5192137 Software Fallback?

Posted by Ravyne on 10 November 2014 - 03:03 PM

It could be drivers, but be aware that the integrated GPU in question is a couple of generations old, and it was really only with the HD4000 series that Intel's IGP performance became acceptable. It's just a lower-performing part than what you might have your local tests running on. Also, since it's old, it might lack features you're using, and that could cause a software fallback to kick in as you suspect. The best idea is probably to investigate and find out whether this is the case, then implement an alternate rendering path that's more optimal for the HD3000. There's a lot of HD3000 hardware out there, so for a simple game that ought to run on it, it's worth making sure it runs well -- that's a lot of customers.




#5192102 Steam sales vs. own Website sales?

Posted by Ravyne on 10 November 2014 - 11:51 AM

That'll depend entirely on your own notoriety. Some people prefer to avoid Steam, but they're relatively few. Steam isn't just a way of selling your game; the Steam platform is an audience -- one with millions of daily regulars. If you can generate similar foot traffic to your own site, you'll probably sell a similar number of copies (within a factor of 2-3, at least), but that's easier said than done. The amount of unique user traffic you generate is the upper bound on potential sales through your site, of course -- if you don't draw people in, they can't buy your game there.




#5192099 Good C (not C++) math library?

Posted by Ravyne on 10 November 2014 - 11:44 AM

I believe part of the reason for this is that most of the very general math libraries in the BLAS mold (you can find a link to CBLAS for C here) favor C++ because it allows for optimizations that I don't know how you'd accomplish in C -- things like using template tricks to elide unnecessary temporaries so that optimal code is produced. Since the folks using these things are all about performance, their efforts have largely followed C++.
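A toy illustration of the template trick in question (expression templates; this is a sketch of the general idea, not how any particular library implements it):

#include <cstddef>
#include <vector>

// An expression node representing lhs + rhs without computing it yet.
template <class L, class R>
struct Add {
    const L& lhs;
    const R& rhs;
    double operator[](std::size_t i) const { return lhs[i] + rhs[i]; }
    std::size_t size() const { return lhs.size(); }
};

struct Vec {
    std::vector<double> data;
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Evaluating the whole expression in one loop: no temporary vector per '+'.
    template <class E>
    Vec& operator=(const E& expr) {
        data.resize(expr.size());
        for (std::size_t i = 0; i < expr.size(); ++i) data[i] = expr[i];
        return *this;
    }
};

template <class L, class R>
Add<L, R> operator+(const L& lhs, const R& rhs) { return {lhs, rhs}; }

int main()
{
    Vec a{{1, 2, 3}}, b{{4, 5, 6}}, c{{7, 8, 9}}, result;
    result = a + b + c;   // builds Add<Add<Vec,Vec>,Vec>, evaluated in a single pass
}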

 

What kind of functionality do you need out of this library? Small vectors and matrices (dimension <= 4)? Large vectors and matrices? Sparse matrices? Quaternions? Statistical solvers and such?

 

If you can be specific, someone might know of something to help you.





