Ravyne

Member Since 26 Feb 2007

#5191727 Is it viable to distribute your game via USB flash stick?

Posted by Ravyne on 07 November 2014 - 03:09 PM

You're looking for something less expensive than what, exactly? Digital distribution via app stores and Steam (which take an approximately 30% 'distribution' fee)?

 

While I do think that 30% is too high (quick plug for the Windows Store: they drop to 20% after you make 50K USD), it's important to realize that you're getting a lot more than just distribution on those channels. Firstly, you get the benefit of their platform having a built-in audience who already have their payment details in the system; do not underestimate the benefit of impulse purchasing -- you'll lose a lot of sales merely for requiring a would-be customer to fill in their details in yet another payment system. You also get the benefit of their reputation -- people know and trust that Valve and Apple have policies to ensure there's no malware hitching a ride on their purchases, and that other policies generally require the software to be of reasonable quality.

 

You, on your own, don't have those benefits. You can use established payment processors like PayPal or Amazon Payments, but some people still won't be in whatever systems you use. It's a non-trivial amount of work to support each payment system you decide to use, too, and to manage the flow of money between them and deal with all the different receipt formats. But even if you solve those problems, the bigger issue is trust -- there's really nothing you can do to get people to trust you other than to build a strong reputation for integrity over time -- lots of time. Too much time to help you be successful in the near term.

 

Between a USB stick and a CD/DVD, there's not a lot of difference. Discs will be cheaper, but you need to be able to place orders of around 1000 units before you get a proper, pressed disc (the way mass-market discs are produced) with printed art -- any less and your discs will come from a wall of DVD burners and have a glossy sticker pressed on. When I looked at these costs years ago, it was about 2-3 USD per unit at a quantity of 2000 -- that was a CMYK-printed, pressed DVD, with an 8-page instruction manual (CMYK cover, B/W inside), inside a DVD case with a CMYK sleeve insert, all shrink-wrapped. It meant laying out 4-6 thousand bucks up front, with the possibility of not selling them all, and needing to sell 400-600 units at $10 just to break even. You can get 4GB USB sticks pre-loaded with your software in similar quantities at prices near the higher end of disc production, and you can even choose from some standard body designs and get your logo or artwork printed on them. For more, you can get custom bodies injection-molded for your USB stick, and/or have them packaged in a nice tin or box.

 

[EDIT] Looking at the link Servant provided, it seems the cost of disc production has gone down a bit -- I can configure a production run similar to what I described for about $3000. USB sticks are far more expensive at over $10000 for 2000 units in reasonable packaging, but I've seen cheaper.

 

At this point, I would not really consider anything other than digital distribution for the bulk of sales. I'd consider a physical release mostly as a sort of special perk for those who might want such a thing, but only if I were able to charge a price that made financial sense for what would probably be an order of fewer than 1000 units. You really have to have quite a popular game to support a physical release.




#5191454 Resource management

Posted by Ravyne on 05 November 2014 - 11:26 PM


Perhaps a ModelManager and a TextureManager, but not a ResourceManager.

 

Right, but let it be noted that if your "manager" only has one responsibility, this is a clear case for making it a template class. Even if you need specialized behavior based on the resource type (e.g. a custom allocator), you can inject that when you instantiate the manager, too. That's part of my own solution as well.
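To illustrate the kind of injection I mean, here's a minimal sketch (Texture and TextureLoader are purely hypothetical placeholders):

```cpp
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>

// Sketch: one manager template reused for every resource type.
// Specialized behavior (loading, custom allocation) is injected via the
// Loader policy when the manager is instantiated.
template <typename Resource, typename Loader>
class ResourceManager {
public:
    explicit ResourceManager(Loader loader) : loader_(std::move(loader)) {}

    std::shared_ptr<Resource> get(const std::string& name) {
        auto it = cache_.find(name);
        if (it != cache_.end())
            return it->second;                          // already loaded
        auto resource = std::make_shared<Resource>(loader_(name));
        cache_.emplace(name, resource);
        return resource;
    }

private:
    Loader loader_;
    std::unordered_map<std::string, std::shared_ptr<Resource>> cache_;
};

// Usage (hypothetical types):
//   ResourceManager<Texture, TextureLoader> textures{TextureLoader{}};
//   auto grass = textures.get("grass.png");
```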

 


Once you have loaded the resource, just how often are you planning to pass, by copy, the shared pointer? You know that if you pass a shared pointer by reference it doesn't change the reference count (no overhead), right?

 

That's true, but it's not quite the whole story. If you were to take a reference to a shared_ptr in this way, you have a way to get at the resource, but (like passing a raw pointer to the contained resource, as I suggested above) holding that pointer or reference doesn't take an ownership interest in the resource -- either method requires that you *know for certain* that the resource can't be lost for as long as you hold that pointer or reference.

 

You have a couple options for passing resources without using a shared_ptr --

  • As described, use a raw pointer or reference (to the resource or to the shared_ptr) if the resource is guaranteed to outlive the pointer/reference. There's no way to check whether the resource is still valid; you can only assume.
  • Use std::weak_ptr -- this is no lighter than shared_ptr, but it expresses different semantics. Instead of making an implicit promise that the resource will outlive it, using weak_ptr says "I have an interest in using the resource later, but only if it still exists -- and it's fine if it doesn't. I'll check first." weak_ptr lets you check, and take a shared_ptr from it later if you want.
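A minimal sketch of the weak_ptr pattern (Texture here is just a stand-in type):

```cpp
#include <iostream>
#include <memory>

struct Texture { int id = 7; };   // stand-in for a real resource type

int main() {
    // The cache (or whoever owns the resource) holds the only shared_ptr.
    auto owner = std::make_shared<Texture>();

    // weak_ptr: "I may want this later, but I won't keep it alive."
    std::weak_ptr<Texture> observer = owner;

    if (auto tex = observer.lock()) {   // still alive: lock() yields a shared_ptr
        std::cout << "using texture " << tex->id << "\n";
    }

    owner.reset();                      // the owner evicts/destroys the resource

    if (observer.expired()) {           // the observer can detect that it's gone
        std::cout << "texture no longer exists\n";
    }
}
```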



#5191421 Resource management

Posted by Ravyne on 05 November 2014 - 04:03 PM

I would say, in general, that any resource system which requires a resource to derive from a base class, or to be wrapped in another class that derives from that same base, is a poor design. Whatever your "resource manager" is, it should simply be able to consume types that represent a resource, whatever they are -- for example, there's absolutely no reason that a script or a localized text resource shouldn't simply be a std::string, but you can't make std::string derive from your base, and wrapping it in a class that does is just worthless boilerplate that forces you to unpack the type you really want every time you use it.

 

Beyond that, your proposed interface already violates the Single Responsibility Principle -- You propose that the interface provides for file-loading and for reference counting, and for--you know--being the resource.

 

On the efficiency of shared_ptr, remember this: the only entities that need a shared_ptr are those that have an owning interest in the resource. In many cases, the place where you need the resource is encapsulated by a scope that has the ownership interest. As an exaggerated example, let's say you pass a shared_ptr to some function that has a recursive implementation -- not every recursive call needs to take an ownership interest in the object, because the single expression of ownership at the top level ensures that the object is still alive for all recursive calls. Likewise, you don't need to pass smart pointers all the way down your renderer, because there's usually something higher up the call-stack that already has an owning interest and is guaranteed to outlive those calls. When your ownership interest is already stated like this, just pass a plain pointer down the call-stack.
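A contrived sketch of that idea (the Mesh type and the function names are made up for illustration):

```cpp
#include <memory>

struct Mesh { /* vertex data, etc. */ };

// Deeper layers take a plain pointer: they use the mesh but make no
// ownership claim of their own.
void drawMesh(const Mesh* mesh) { /* issue draw calls with *mesh */ }

void renderScene(const Mesh* mesh) { drawMesh(mesh); }

void frame(const std::shared_ptr<Mesh>& mesh) {
    // Ownership is already expressed here (and in whatever owns 'mesh'
    // above us), so the resource is guaranteed to outlive these calls.
    renderScene(mesh.get());
}

int main() {
    auto mesh = std::make_shared<Mesh>();
    frame(mesh);
}
```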

 

What I have myself is what I call a "resource cache". In my current system I wanted eviction of a resource from the cache to occur only when system resources are low (e.g. evict an unused resource to make room for a new resource that's needed), or when explicitly requested, so the cache has an ownership interest in resources and stores them using a shared_ptr. That shared_ptr is the value in a key-value pair (std::map), where the key is whatever the caller wants on a per-cache basis and serves as the unique ID of a resource (e.g. integer IDs 1 and 2 are different resources, even if they would load the same file) -- the mapping between IDs and file names is a different responsibility and hence a different system (a simple std::map). In this system, different types of resources go into different caches. I'm leaving out some details about resource affinity/priority and lazy-loading, but I've been pretty happy with this system for years, and I've not once had to derive from a resource base class or implement a resource wrapper. My implementation is ~400 lines of C++, and that includes a shared_ptr implementation (pre-C++11, remember?) and something like a limited lambda object for lazy reloading.
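In very rough outline, a heavily simplified version of that cache might look like this (names invented for the sketch; the real thing also handles eviction under memory pressure, affinity, and lazy reloading):

```cpp
#include <functional>
#include <map>
#include <memory>

// One cache per resource type; the key type is whatever the caller wants.
template <typename Key, typename Resource>
class ResourceCache {
public:
    using Loader = std::function<std::shared_ptr<Resource>()>;

    // The cache takes an ownership interest in everything it holds.
    std::shared_ptr<Resource> acquire(const Key& id, const Loader& load) {
        auto it = resources_.find(id);
        if (it != resources_.end())
            return it->second;
        auto resource = load();           // lazy-load on first request
        resources_.emplace(id, resource);
        return resource;
    }

    // Explicit eviction; a fuller version would also evict when memory is low.
    void evict(const Key& id) { resources_.erase(id); }

private:
    std::map<Key, std::shared_ptr<Resource>> resources_;
};
```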

 

Now that C++11 is a thing that's pretty well supported I actually do want to create a further-improved resource cache that would have a better memory layout and support resource metadata to drive some of the other features I have like affinity and lazy-loading.




#5191175 Datatype Size and Struct Compiler Padding

Posted by Ravyne on 04 November 2014 - 02:16 PM

To be truly portable, you need to not write whole structs/classes -- only their constituent primitive types. As a load-time optimization, you can pre-format your stored data to match the structure/class layout of a particular target platform, repeating for each target platform. Keep in mind that one of the variables in "platform" is the compiler version you are using and its settings -- both of which can affect layout and packing.
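A sketch of the field-by-field approach (fixed-width types written one member at a time; endianness handling is omitted here, but you'd want to settle on one byte order too):

```cpp
#include <cstdint>
#include <cstdio>

struct PlayerRecord {     // in-memory layout (padding, ordering) is compiler-dependent
    std::uint32_t id;
    float         health;
    std::uint16_t level;
};

// Write each primitive member explicitly -- never the struct as a whole --
// so the on-disk format is independent of compiler padding and packing.
void writeRecord(std::FILE* out, const PlayerRecord& r) {
    std::fwrite(&r.id,     sizeof r.id,     1, out);
    std::fwrite(&r.health, sizeof r.health, 1, out);
    std::fwrite(&r.level,  sizeof r.level,  1, out);
}

void readRecord(std::FILE* in, PlayerRecord& r) {
    std::fread(&r.id,     sizeof r.id,     1, in);
    std::fread(&r.health, sizeof r.health, 1, in);
    std::fread(&r.level,  sizeof r.level,  1, in);
}
```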

 

You might also consider something like Google's protocol buffers, but that's a C++/Java/Python solution which might not be viable on your embedded platforms. 




#5191034 someone help me ID this pattern?

Posted by Ravyne on 03 November 2014 - 06:17 PM

Patterns aren't a set thing to be employed -- that is, there is no single "observer pattern" that everyone agrees is the canonical implementation of the observer pattern. Patterns are more like a tool for planning and talking about design problems, and they span frameworks and even languages, hence the lack of a canonical implementation -- it doesn't even make sense to speak of one at an implementation level.

 

Patterns, at an implementation level, mutate and sometimes blend together with other patterns. Here, patterns bend to the problem you are solving, not the other way around.




#5190198 What is a lobby server?

Posted by Ravyne on 30 October 2014 - 02:00 PM

You can think of them as simply being the server where matches are made. In console games, it's pretty popular to automate the process of matchmaking, so you don't see lobby servers too much. On the PC side of things, traditional lobbies are more popular. Typically you might have a chat stream (like IRC) and a list of available servers that shows the game type, number of players, max players, and score or remaining game time. You just pick the one you want to join.
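Roughly speaking, each row in that server list is just a little record like this (a hypothetical sketch, not any particular protocol):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Roughly what a lobby server hands each client: a chat stream plus a list
// of joinable matches. Names and fields are purely illustrative.
struct MatchListing {
    std::string   gameType;          // e.g. "CTF", "Deathmatch"
    std::string   serverName;
    std::uint8_t  playerCount = 0;
    std::uint8_t  maxPlayers  = 0;
    std::int32_t  score       = 0;   // or remaining time, depending on the mode
    std::int32_t  secondsLeft = 0;
};

struct LobbySnapshot {
    std::vector<std::string>  chatMessages;   // IRC-style chat stream
    std::vector<MatchListing> matches;        // the player picks one and joins
};
```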




#5189989 Finished c++ primer plus and console. Now I want to make a 2D tile game!

Posted by Ravyne on 29 October 2014 - 12:32 PM


Using something like SDL or SFML makes about 10x more sense than Win32.

 

QFE. Either SDL, SDL2, or SFML is the right tool for the job at this point in OP's development. Win32 is ancillary to his stated intent, and GL/DX are suited to accomplishing the task but much too complicated for such a simple thing -- just setting up and tearing down GL/DX is a large enough task to obscure the simple act of putting some things on the screen and moving them around.




#5189987 My goodness multiple inheritage is like a taboo. Why?

Posted by Ravyne on 29 October 2014 - 12:24 PM

Novices tend to misuse MI as a way of achieving code reuse without realizing all the baggage it brings along. MI systems, as they grow larger, tend to become brittle and develop warts where unexpected interactions cause conflicts. Worst of all, MI systems become very difficult to untangle once assumptions about the MI structure leak into other systems. They also cause class bloat and layout differences in non-obvious ways.

 

Inheritance should only be used to express 'is-a' relationships (the Liskov Substitution Principle), not 'has-a' -- a SpecificCharacter is-a Character, but a SpecificCharacter has-a Sword. You express has-a relationships through composition (the SpecificCharacter class contains a member variable for the sword); the Sword itself might be a subclass of a Weapon base class.
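In code, the distinction looks something like this (class names are just for illustration):

```cpp
#include <memory>
#include <string>

class Weapon {
public:
    virtual ~Weapon() = default;
    virtual int damage() const = 0;
};

class Sword : public Weapon {                    // a Sword is-a Weapon
public:
    int damage() const override { return 10; }
};

class Character {
public:
    virtual ~Character() = default;
    std::string name;
};

class SpecificCharacter : public Character {     // is-a Character...
public:
    std::unique_ptr<Weapon> weapon = std::make_unique<Sword>();   // ...has-a Weapon
};
```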

 

But in general, flatter class hierarchies are less cumbersome, and data-driven design techniques can keep the layers of inheritance and overall number of classes to a minimum.




#5189569 General Programmer Salary

Posted by Ravyne on 27 October 2014 - 07:56 PM

The biggest problem with these numbers and that survey is that the numbers don't really tell you much, and they also tend to skew statistically high, I'd say, since I believe the survey is self-reported if it's the Gamasutra survey. But it's really hard to say anything because you don't know whether the respondents were specialty programmers (graphics or what-have-you), which commands a marked salary increase over the humble "gameplay programmer" in most places. The industry also tends to be a place where there's a significant difference in salary between someone who's shipped at least one commercial game and someone who hasn't -- probably because of the relatively high wash-out rate for first-time game devs, versus the much smaller set of people who have shipped a title and come back for more.

 

I would say, at the very low end, a fresh college graduate at a smaller studio might make a salary as low as around 50K -- a number of years ago I was made an offer similar to that, not long after I was out of school. People who have been around through multiple titles and have a specialty or are very strong generalists can top 100K relatively easily; 120K or more is less common, but not unheard of.

 

But in general, for the skills and hours demanded, the games industry is almost always less-compensated than other places you could be employed as a programmer.




#5189479 Standard Project Structure Layout

Posted by Ravyne on 27 October 2014 - 01:58 PM

First, you can create file-system folders that mirror the project folders ("filters" in VS parlance) so that you don't have name collisions, not that you should anyhow, but it helps for general organization. When you add a new file through the VS interface, just be sure you're creating it in the place where you want it to be, and to use the context menu of the filter you want it to appear under. (right-click, add new-file / add existing-file)

 

Other than that, it sounds like you're over-complicating things and experiencing paralysis by analysis. If this is your first real C++ undertaking, my recommendation is to not try to build some abstract engine. Yes, keep engine code separate from game code as best you can, but your first project, first engine, first anything in C++ is not likely to be something you'll do well enough that you won't want to do it over next time. Make your game, make a little-e engine to support it, and only it -- don't try to make a Big-E Engine from the start.

 

 

A typical structure I have for a game project is three folders -- one for game assets, one for game code, and one for engine code. In a simple project, these are usually just folders/filters under a single VS project. In something a little more complex, they are each projects under a VS solution, with dependencies set accordingly to facilitate the correct build order. Game assets include graphics, sounds, music, maps, configuration data, and possibly scripts (I put them here; some people might consider them part of the game code), and I make a folder for each type of asset. Game code includes all code that implements game-specific features or logic (possibly including scripts); if you put scripts here, I'd give them a folder from the start -- for other code files, I don't separate them out into folders by feature area until it seems necessary, and on moderate-sized projects it often never becomes necessary. Engine code is basically the same, but you generally have a better idea of what broad feature areas exist and can make some broad organizational bins early -- Graphics, Sound, Input, Platform, Math.

 

In general, you should prefer to avoid making deeply-nested folders as an organizational technique. Extra layers of folders can make things more complicated than they need be, if you ever need to make custom build rules. Try to stay relatively flat until you're convinced the extra layers give you value. You don't have to have the right, final structure from the start. Moving things around is just another kind of refactoring.




#5188594 what's the principle behind the shader debugger?

Posted by Ravyne on 22 October 2014 - 02:48 PM


I think that if I created a virtual machine which supports asm shader code, maybe I can replay the pixel shader process

 

Yes, that is the gist of it, but it's a whole project in its own right. You might be able to piggy-back on an existing software renderer like Mesa, Direct3D's WARP, or the DirectX reference device. There are limitations, though -- for one, your debugging is only as accurate as your VM's adherence to the spec. This can get very, very detailed, down to the level of exactly where a texel is sampled during boundary conditions, how floating-point values behave, and implementing 16-bit half-floats in software along with their conversions to and from more common types. Secondly, it's debugging with respect to a reference, so it can help you make sure that *you* are doing the right thing, but if the driver or device does the wrong thing it won't help you (it's still a great benefit to know the problem belongs to someone else).




#5188519 What is bad about a cache miss since a initial cache miss is inevitable

Posted by Ravyne on 22 October 2014 - 08:21 AM

Woe is he who uses virtual dispatch on his hot loops. -- ancient Chinese proverb

Yes, certainly the cult of OOP papers over its ongoing transgressions, and its leadership still encourages blind adherence to doctrine that can be harmful. But as Frob pointed out so well, at least you're getting what you're paying for. A wiser person would simply avoid virtual dispatch where it isn't necessary.

Data-Oriented Design techniques, it's worth noting, ought to positively impact cache efficacy for both data (increasing spatial/temporal locality, splitting data structures along usage lines) and code (sorting polymorphic collections such that all Foo subclasses are processed at once, then Bar, and so on).

There's performance to be gained for sure, but you ought to be fairly well off if you haven't done something painfully naive. I'd pretty much exhaust optimizing D-cache behavior before examining I-cache behavior (though I would design from the start with both in mind).


#5188449 What is bad about a cache miss since a initial cache miss is inevitable

Posted by Ravyne on 21 October 2014 - 09:44 PM

Cache misses are inevitable -- you're speeding up memory accesses by mapping several gigabytes of memory space into just a few megabytes. In other words, cache lines are very limited resources, and every miss that causes a cache line to fill also means evicting whatever data was there before.

 

Instruction-cache misses are less of a big deal, since the structure of code itself lends itself pretty naturally to being cached. For the most part, all you have to worry about is that your hot loops, and any code they call out to, ideally fit inside the L1 instruction cache.

 

Data-cache misses are a much bigger deal, since many data structures are non-linear or accessed in an unpredictable pattern (e.g. indexing into an array at random, or iterating a linked list). When you jump all over memory looking at scattered bytes, you evict what might otherwise be useful data that was already in the cache.

 

Think of it like this -- all your gigabytes of main memory are divided up into little chunks of 64 bytes, the same size as a cache line. Whenever you read a single byte from any chunk, the entire chunk is loaded into a CPU cache line. Now, most CPU caches are 8- or 16-way set-associative, meaning there are 8 or 16 cache lines (a 'set') that can hold a particular chunk copied in from main memory, and it also means that a great many different chunks map to that same set. This might sound pretty good -- surely you won't be reading from 16 different, conflicting chunks at the same time, right? Well, guess what: you have 8GB of RAM and, let's say, 4MB of 16-way set-associative cache, which means that, on average, some 2048 chunks of main memory are competing for every cache line. Within a set, the 17th distinct chunk you touch, and every new chunk thereafter, necessarily boots some other chunk out of the cache. In the worst-case scenario, it boots something you'll need again soon, and your access pattern keeps re-evicting it, causing data to ping-pong back and forth between cache and main memory. You only get the benefit of the cache if you continue to access more bytes from that cache line while it's resident.
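As a contrived illustration of keeping a cache line useful versus throwing it away (hypothetical helper functions, assuming the array is much larger than the cache):

```cpp
#include <cstddef>
#include <vector>

// Cache-friendly: walks memory linearly, so every 64-byte line pulled into
// the cache is fully used before we move on.
long long sumSequential(const std::vector<int>& data) {
    long long sum = 0;
    for (int value : data)
        sum += value;
    return sum;
}

// Cache-hostile: multiple widely-strided passes touch one element per cache
// line per pass, so (for arrays much larger than the cache) each line gets
// evicted and re-fetched many times instead of once.
long long sumStrided(const std::vector<int>& data, std::size_t stride) {
    long long sum = 0;
    for (std::size_t start = 0; start < stride; ++start)
        for (std::size_t i = start; i < data.size(); i += stride)
            sum += data[i];
    return sum;
}
```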

 

And not only does it take a long time to load something from memory, but main memory *bandwidth* is also a limited resource. I did some back-of-the-envelope calculations using DDR3-1866 on a modern x64 platform as an example. If your game runs at 60 frames per second and has absolutely pathological access patterns (that is, one byte is read from a cache line and that line is never touched again), your memory bandwidth and cache lines conspire to limit you to touching less than 8 megabytes of code and data per frame. 8 megabytes is not a lot! With fully optimal cache patterns, you can access about 500MB of code and data per frame (and you can read/write it multiple times for essentially free as long as it stays in the cache), which is a world of difference. Even so, this is not a lot of data, and it's a big part of the reason why some games run different parts of their simulation at different rates (for example, you might run simulation logic at 30fps, or only run 1/4th of your AI entities each frame).
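For the curious, here's roughly how that back-of-the-envelope math works out, assuming dual-channel DDR3-1866 (~29.9 GB/s peak):

```cpp
#include <cstdio>

int main() {
    const double transfersPerSec  = 1866e6;        // DDR3-1866: 1866 MT/s
    const double bytesPerTransfer = 8.0 * 2.0;     // 64-bit bus, two channels
    const double bytesPerSec      = transfersPerSec * bytesPerTransfer;  // ~29.9 GB/s
    const double bytesPerFrame    = bytesPerSec / 60.0;                  // ~498 MB/frame

    // Pathological pattern: only 1 useful byte out of every 64-byte line fetched.
    const double usefulPerFrame   = bytesPerFrame / 64.0;                // ~7.8 MB/frame

    std::printf("peak bandwidth : %.1f GB/s\n", bytesPerSec   / 1e9);
    std::printf("per 60Hz frame : %.0f MB\n",   bytesPerFrame / 1e6);
    std::printf("worst case     : %.1f MB of useful data per frame\n",
                usefulPerFrame / 1e6);
}
```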




#5187551 DLL-Based Plugins

Posted by Ravyne on 16 October 2014 - 07:56 PM


I put the try-catch block there in case there is an exception because I don't want the Editor to crash if the Game DLL causes an exception. Would this be considered as exception handling across binaries?

 

It would. If you were to stick with native plugins and a DLL architecture, one approach is to marshal exceptions across the DLL boundary. A simplistic way of doing this is to surround your plugin entry points with a catch(...) so that no exception leaks from the DLL (you can also catch and handle specific exceptions in the same place) -- then have your DLL function log the exception data as a dump and pass back an integer value that indicates the type of the exception. Your client can look up the error code and present the user with a basic description of the exception, and point them to the log for more information -- if necessary, certain error codes might mean the client should throw the indicated exception itself, as a means of propagating the error to where it can be handled correctly.
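A simplified sketch of that approach (error codes and function names are invented for the example):

```cpp
// --- Inside the plugin DLL: no exception ever crosses the boundary ---
#include <cstdio>
#include <new>

enum PluginResult : int {
    kOk           = 0,
    kOutOfMemory  = 1,
    kUnknownError = 2,
};

extern "C" int plugin_do_work() {
    try {
        // ... real plugin work that may throw ...
        return kOk;
    } catch (const std::bad_alloc& e) {
        std::fprintf(stderr, "plugin: %s\n", e.what());   // dump details to a log
        return kOutOfMemory;
    } catch (...) {                                       // nothing leaks out
        std::fprintf(stderr, "plugin: unknown exception\n");
        return kUnknownError;
    }
}

// --- Inside the host/editor: translate the code back if appropriate ---
//   int result = plugin_do_work();
//   if (result == kOutOfMemory) throw std::bad_alloc();  // re-raise locally
//   else if (result != kOk)     /* show the user the logged description */;
```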




#5187546 DLL-Based Plugins

Posted by Ravyne on 16 October 2014 - 07:45 PM

I think, though, that what everyone is trying to warn you off of is that native DLLs aren't straightforward. It's not as easy as implementing an interface -- it's like that, except that the interface *also* consists of a dozen or three sort-of-hidden settings and dials that all have to be exactly in sync -- the right calling conventions, the same ABI, the same behavior settings for floating-point code (potentially), the same data packing on shared structures (or a mutually agreed-upon serialization). A dozen compiler settings you probably never think to touch suddenly become critically important. In general, I'd stick my neck out as far as saying that native-code DLL plugins are largely out of favor -- people will put up with them where performance is a necessity, but it seems like too much trouble when it's not.

 

[EDIT] I realize now that the post above was probably in response to nypyren, and that you weren't justifying sticking to native DLLs in the face of my previous post. I'll leave this here for posterity, but apologize that the tone might be a bit harsh in retrospect.





