
frob

Member Since 12 Mar 2005
Offline Last Active Yesterday, 11:42 PM

#5306895 What's a room?

Posted by frob on 20 August 2016 - 11:03 AM

It is not as bad as you're making it out to be.

 

 

 

As shown in the picture, a room is a set of walls, which are impenetrable navigation or physics objects. Most physics engines make this easy: walls are just rectangles or boxes added to the level/room.

 

Moving between rooms is handled by a trigger area in the doorway. It has no visual component, just a collision area. Collision with the trigger causes the next-room event.
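To make that concrete, here is a minimal C++ sketch of a doorway trigger; the names (AABB, DoorTrigger, onEnter) are made up for illustration rather than taken from any particular engine:

#include <functional>

// Axis-aligned bounding box with a simple overlap test.
struct AABB {
    float minX, minY, maxX, maxY;
    bool overlaps(const AABB& o) const {
        return minX <= o.maxX && maxX >= o.minX &&
               minY <= o.maxY && maxY >= o.minY;
    }
};

// An invisible trigger volume placed in the doorway.
struct DoorTrigger {
    AABB area;                         // collision-only; no visual component
    int  targetRoomId;                 // the room to load when touched
    std::function<void(int)> onEnter;  // "next room" event handler

    void update(const AABB& playerBounds) const {
        if (area.overlaps(playerBounds) && onEnter)
            onEnter(targetRoomId);     // fire the room-change event
    }
};

When the player's bounds overlap the trigger's area, the callback fires with the target room and your level-loading code takes over.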

 

As for them needing identifiers, EVERYTHING needs identifiers. As for them needing to be created/destroyed or loaded/unloaded, EVERYTHING needs to be loaded at some point. You will need to load the room, but that includes everything: the walls, the monsters, the keys/items, and whatever else you've got in your room. Level loading is bog-standard functionality you'll need in everything with a level, from what bricks to display in breakout, to the blocks and platforms in classic Mario games, to all the rocks and obstacles in an MMO. 




#5306835 Multiplayer Web Game - Is SQL fast enough?

Posted by frob on 19 August 2016 - 08:37 PM

"Fast enough" depends on your needs.  A SQL database works well for persistent data, but is terrible if you need things more than once. Load it, use it for a long time, store periodically as you modify the data.

 

Simple SQL database operations that hit the disk are usually on the order of 10ms per call. It can be much faster if it doesn't need to hit the disk, and much slower if the query is complex or returns significant data.  For example, if you only need simple values from a single table based on an indexed key, on a fast machine with SSD storage, you'll likely see single-digit milliseconds.  But if you are searching for aggregate data or filtering on a non-indexed value and need a full table scan of a 200 gigabyte table on a slower spindle disk, you'll be waiting quite a while.

 

For an interactive game, that is about equivalent to a full graphics frame: 60 frames per second is about 16ms per frame, a bit less once you account for the overhead of other tasks.  However, if your game involves a whole HTML page being requested, loaded, and processed, that round trip is usually on the order of 200ms-500ms, so a few database calls are just fine.  If you need something between the two, such as server-side processing, there are systems out there that cache data access so you only incur the full cost of a database read the first time a value is requested, and not on later hits to the cache.
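As a rough sketch of that caching idea, something like the following keeps the millisecond cost to the first read only; PlayerDataCache and loadFromDatabase are hypothetical names standing in for your own data and query:

#include <string>
#include <unordered_map>

// Minimal read-through cache: first request pays the full database cost,
// later requests are served from memory.
class PlayerDataCache {
public:
    std::string get(int playerId) {
        auto it = cache_.find(playerId);
        if (it != cache_.end())
            return it->second;                        // memory hit: effectively free
        std::string row = loadFromDatabase(playerId); // disk hit: milliseconds
        cache_.emplace(playerId, row);
        return row;
    }

private:
    std::string loadFromDatabase(int playerId) {
        // Placeholder for the real SQL query.
        return "row for player " + std::to_string(playerId);
    }
    std::unordered_map<int, std::string> cache_;
};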

 

Without knowing quite a lot more about your game needs and your architecture, an "it depends" answer is about the best you will get.




#5306834 Handling multiple "levels" or "scenes" within a world

Posted by frob on 19 August 2016 - 08:27 PM

It depends quite a lot on how your world system operates and what your engine supports.

 

 

Most projects I've worked on have used a scene or world hierarchy of sorts. 

 

You've got nodes in your hierarchy.  The basic world leaves them mostly empty or filled with proxy or placeholder objects for the true content.  This might be individual zones or lots or coordinate regions in a large world. This might be an arbitrary root world node that can have child nodes attached. When the time comes to load content into that area, you create a new node hierarchy, load all the data into it, and then attach the hierarchy by swapping out the proxy/placeholder and inserting the full bundle.
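A bare-bones C++ sketch of that proxy swap might look like this; SceneNode, loadZone, and attachZone are invented names, not a specific engine's API:

#include <cstddef>
#include <memory>
#include <string>
#include <vector>

struct SceneNode {
    std::string name;
    bool isProxy = true;   // placeholder until the real content is attached
    std::vector<std::unique_ptr<SceneNode>> children;
};

// Build the full subtree for a zone off to the side (placeholder body here).
std::unique_ptr<SceneNode> loadZone(const std::string& zoneName) {
    auto zone = std::make_unique<SceneNode>();
    zone->name = zoneName;
    zone->isProxy = false;
    // ... load walls, monsters, props, etc. as child nodes ...
    return zone;
}

// Swap the proxy out of the world hierarchy and insert the full bundle.
void attachZone(SceneNode& worldRoot, std::size_t proxyIndex, const std::string& zoneName) {
    worldRoot.children[proxyIndex] = loadZone(zoneName);
}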

 

Regarding your confusion about dealing with copies and proxies of unloaded objects, that is resolved readily enough by using persistent ids and decoupling the representation of the data in the simulator from all the other data like models and textures and audio and animation and effects and world information. 

 

For your example of a tree, the actual tree data is a tiny bit of state about the tree's health, possibly a flyweight piece of data.  That tiny piece of data indicates that it is a tree and that it has a health level. The choice of model to display would invoke your tree renderer.  Perhaps every tree is little more than world coordinates, an index of the type of tree, and a value for the tree health.  Those 64 bits of data per tree (or maybe even less) are small enough that you can have hundreds of trees visible in your world in a few kilobytes of data.
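As an illustrative C++ sketch (field names and exact sizes are just assumptions), the per-tree data could be as small as:

#include <cstdint>
#include <vector>

// Flyweight idea: per-tree simulation state is tiny; the heavy model and
// texture data live once per tree *type*, not per instance.
struct TreeInstance {        // 8 bytes (64 bits) of state per tree
    std::uint16_t gridX;     // world coordinates on a coarse grid
    std::uint16_t gridY;
    std::uint8_t  typeIndex; // which shared tree model/archetype to render
    std::uint8_t  health;    // 0-255 health level
    std::uint16_t reserved;  // explicit padding, keeps the size obvious
};

std::vector<TreeInstance> trees;  // hundreds of trees fit in a few kilobytes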

 

You've put out a few options for ways you might save them, and they each have their own pros and cons. They might work well with the engine and tools you are using, or they might not. Your description of the scene being little more than a list of entity IDs is essentially how many games do it.  When it is stored to disk there is often one set of data that contains the scene hierarchy as a collection of IDs, and another set of data that contains whatever each ID refers to.  Such a system lets all the items be persisted and replaced with their IDs as they are written out; it ensures that if an item is a clone with multiple instances in the hierarchy then the contents are only written out once.




#5306688 Some problems with ECS design

Posted by frob on 19 August 2016 - 04:38 AM

Personally I'm in favor of having a virtual update(ComponentStorage& components, float dt) method for every system type, where ComponentStorage holds the components in arrays organized by component type. The systems can be viewed as operators on the state of the components in the engine, where on each frame the internal/external state is updated according to the elapsed time. Relationships between systems can be implemented by giving a system pointers to the other systems it depends on during initialization (where the concrete types of the systems are known). There's no need to shoehorn everything into the update method, systems are free to have other methods that perform other actions (like being notified of added/removed components).


Beware the pattern of virtual functions for all the things. Especially beware of actually calling all the virtual functions on all the things.

That is a pattern that will quickly destroy all performance. You pay a cost for every object you create even if you never implement the behavior.

You can provide virtual functions in your interface if you'd like, but if you do so take care that you don't actually call them when you don't need them. Otherwise you'll be calling hundreds, maybe even thousands, of unnecessary virtual functions. It starts with just one, but soon Update() isn't enough, then you'll add both PreUpdate() and PostUpdate(). You'll end up with PreRender() and PostRender(), PrePhysics() and PostPhysics(), and probably more besides. Before long every object has a collection of virtual functions that are called all the time but do absolutely nothing.

I've seen it before on engines I've been brought in to help repair. It is a nasty pattern because it is deceptively appealing. It is easy, right? Just make a virtual method that everybody can implement or ignore with the base functionality. But when it is done the result tends to be that you are burning all your cycles on virtual dispatch to empty functions.

The best pattern tends to be to only call the functions on objects that actually want and need updating. That usually means registration. Alternatively it can be done through introspective, reflective, or dynamic patterns in languages that support them efficiently, but as this is tagged C++, those latter options don't really exist.

A less good pattern, but still far better than a ton of useless virtual function calls, is to avoid the virtual call with a non-virtual test before it is ever invoked. For example, keep a bool in the base class that the base implementation flips to disable future calls: in the non-virtual inline wrapper you have if(hasFeatureCall) { MyVirtualFeatureCall(); }, and in the base class, MyVirtualFeatureCall() { hasFeatureCall = false; }. You're still paying a small penalty for every call on every object, but that penalty is far less than jumping out to a virtual function, finding it empty, and returning.
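A small C++ sketch of that guard, with invented names, might look like:

// Non-virtual wrapper skips the virtual call once the base implementation
// reports it has nothing to do.
class GameObject {
public:
    virtual ~GameObject() = default;

    void update(float dt) {         // non-virtual, inlinable wrapper
        if (hasUpdate_)
            onUpdate(dt);           // only pay for virtual dispatch when needed
    }

protected:
    virtual void onUpdate(float /*dt*/) {
        hasUpdate_ = false;         // base implementation: "never call me again"
    }

private:
    bool hasUpdate_ = true;
};

class Missile : public GameObject { // a type that actually wants per-frame updates
protected:
    void onUpdate(float dt) override { /* move, check fuse, etc. */ (void)dt; }
};

Types that genuinely need updating override onUpdate(); everything else silently opts out after the first call.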


#5306580 Some problems with ECS design

Posted by frob on 18 August 2016 - 11:52 AM

It looks like these are all premature attempts at optimizations.  The things you are discussing are not problems in the real world.
 

1)fast (no vtables and random memory access)

 
If you need virtual dispatch then vtables are currently the fastest available way to implement that. You're going to need some way to call the function.  
 
On x86 processors since about 1995, virtual dispatch has approximately zero cost: the first time a vtable entry is accessed the CPU caches it, and assuming you touch it occasionally it stays in cache. Since you should be touching it around 60+ times per second, the CPU will happily keep it around at essentially no cost to you.
 
In other words, you say you don't want vtables as you think they are not fast; but vtables are the fastest known solution to the task.  Use them.

 

  
 
As for random memory access, unless you can somehow organize your world and scene graphs so data traversal of components is linear, you'll need to live with some of that.  Be smart about it so the jumping around lives in L2 cache.

 

Random memory access can be amazingly fast, or it can be tortuously slow. The only way to know is to run it on a computer, profile it with cache analysis tools, and determine how your real-world memory patterns are working.  
 

2)cache-friendly

 
 
While you have clearly minimized the size of your data (good), you have decreased cache friendliness by making it non-contiguous.  You have an array of structures; the cache is generally happiest with a structure of arrays, since parallel instructions work best when operating on a batch of elements at once rather than on a single item alone.
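For illustration, here is a small C++ sketch contrasting the two layouts; the particle fields are just an example:

#include <cstddef>
#include <vector>

// Array of structures: one entity's fields sit together, the same field
// across entities does not.
struct ParticleAoS { float x, y, z, lifetime; };
std::vector<ParticleAoS> particlesAoS;

// Structure of arrays: each field is its own contiguous array.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> lifetime;
};

// A system that only touches lifetimes streams through one packed array,
// which is cache-friendly and easy to vectorize.
void ageParticles(ParticlesSoA& p, float dt) {
    for (std::size_t i = 0; i < p.lifetime.size(); ++i)
        p.lifetime[i] -= dt;
}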

 

There is far more to cache friendliness than size of the data. The only way to know for certain how your program interacts with the cache is to run it with cache analysis tools to determine how your real-world memory patterns are working.
 

1 Component = 1 System

I need a new component + system

 
 
You keep using the word "system" in a way I'm not familiar with.  A system is any group of connected things. Any time you take any series of actions with any object you have created a system.  The interfaces you create define how the system is used.
 
Did you create a process or class or structure and give it a name "system"?
 

Intersystem interaction occurs via messaging

 
 
This can work reliably, but the thing most developers think of with messaging tends to add performance overhead, not remove it.
 
If you need to make a function call on an object then do so, that is the fastest way.  Going through a communications messaging service adds a lot of work. 

 

There are many excellent reasons to use messaging services: allow extension by adding message listeners, resolving threading issues and non-reentrant code, and processing load balancing are a few of them.  Faster execution time is not one of those reasons. 

 

1. How can I ensure the safety of pointers? std::vector can break my pointers while resizing. (P.S. I don't want have array with a static size)

 

If you use any type of dynamic array and you add or remove items, you cannot avoid it.  If you want the addresses to remain constant you cannot use a dynamic array.  I suggest learning more about fundamental data structures.
 
Some other options are to use a different structure (perhaps a linked-list style) or to store references to objects (such as a container of pointers).  This will break your contiguous access pattern, but it gives you stable addresses. You need to decide which is more important.
 
Alternatively you can design your system to not store addresses of items, to work on clusters of items at once, and to not hold references to data they don't own.
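Here is a brief C++ sketch of those options (owning pointers for stable addresses, or handles/indices instead of raw pointers); Component and ComponentHandle are invented names:

#include <cstdint>
#include <memory>
#include <vector>

struct Component { std::uint32_t entityId = 0; /* ...component data... */ };

// Option 1: the vector may reallocate, but each Component object never moves,
// so raw Component* pointers stay valid.
std::vector<std::unique_ptr<Component>> stableStorage;

// Option 2: keep components contiguous, and hand out indices instead of
// pointers; callers re-resolve the handle each time they need the object.
struct ComponentHandle { std::uint32_t index; };

Component& resolve(std::vector<Component>& dense, ComponentHandle h) {
    return dense[h.index];
}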

 

 

 

 

 

If you really are concerned about performance you need to start up a profiler and look at the actual performance. Measure, compare against what you expect, and find the actual performance concerns. Performance is not something you can observe by just looking at the code; you need to see how it actually moves through the computer.  Generally the best-performing code looks complex and is larger than you first expect; the solutions that are simple and small tend to perform poorly as data grows because they don't fit the CPU's best features.

 

The things you are mentioning are tiny performance gains by themselves, and if you do have any performance concerns on your project they're not coming from these choices mentioned in the thread.




#5306474 Ecs Architecture Efficiency

Posted by frob on 17 August 2016 - 11:04 PM

Based on all these updates, I'm taking the broader question to be "How many items can I stuff in an ECS game system before it slows down?"

 

 

I've worked on systems with well over five thousand articulated models on screen, with fully simulated game objects, before the processing power started to bog down.  I've been brought in on contract to help a project with under 200 static models on screen that could barely maintain 30 frames per second on mainstream hardware. And I've worked on about 20 projects that fall somewhere in between.

 

The choice to use an ECS game system has absolutely nothing to do with those performance numbers. 

 

 

BEGIN TEACHING MODE:

 

 

The biggest determining factor in performance is how you use your time.

 

Steam Hardware Survey says about half of gamers today (46.91%) still have 2 physical cores, and they're about 2.4 GHz.   So if you're targeting mainstream hardware, you get about five billion cycles per second if you use them all.   Each cycle takes about 0.41 nanoseconds, but we'll call it a half nanosecond for easier math.

 

You lose a big chunk of that to the operating system and other programs. Let's call your share about 4 billion per second, or about 66 million processor cycles per frame. What you do with those cycles is up to you and your game.

 

Some tasks are extremely efficient, others are terribly inefficient.  Some tasks are fast and others are slow. Some tasks block processing until they are done; others can be "fire-and-forget", scheduled for whenever is convenient for the processor.  Sometimes doing what appears to be exactly the same thing is in fact a radically different operation under the hood, giving very different performance numbers for reasons you didn't think about.

 

 

 

 

 

The most frequent performance factor, and usually the easiest to address, is the algorithm chosen to do a job.

 

There are algorithms that are extremely efficient and algorithms that are inefficient. As an example, when sorting a random collection of values the bubblesort algorithm is very easy to understand but will be slow.  The quicksort algorithm is harder to understand but will typically be fast.  And there are some more sorting routines out there like introsort that are quite a bit more difficult to implement correctly but can be faster still.

 

You can choose to use a compute-heavy algorithm when the program is run, or you can change the algorithm to use some data processing at build time in exchange for near-instant processing or precomputed values at runtime. Swap the algorithm to bring the time to nothing, or nearly so.  For example, rather than computing all the lighting and shadowing for a scene continuously, an engine may "bake" all or most of the lighting and shadowing directly into the world.

 

You can often choose to switch between compute time and compute space, similar to that above. Precomputed values and lookup tables are quite common. In graphics systems it is fairly common to encode all the computing information into a single texture, then replace the compute algorithm with a texture coordinate for lookup. Textures for spherical harmonics are commonplace these days; even if artists don't know the math behind them many can tell you how "SH Maps" work and that they improve performance.
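As a tiny C++ illustration of the time-for-space trade (a toy example, not a recommendation to replace your math library), a precomputed sine table swaps a transcendental call for an array lookup:

#include <array>
#include <cmath>

// Build the table once; replace the runtime computation with an indexed
// lookup, the same idea as baked lighting or encoding data into a texture.
constexpr int kTableSize = 1024;
constexpr float kTwoPi = 6.2831853f;

std::array<float, kTableSize> buildSineTable() {
    std::array<float, kTableSize> table{};
    for (int i = 0; i < kTableSize; ++i)
        table[i] = std::sin(i * kTwoPi / kTableSize);
    return table;
}

const std::array<float, kTableSize> kSineTable = buildSineTable();

float fastSin(float radians) {                 // approximate, table-driven
    float turns = radians / kTwoPi;
    int index = static_cast<int>((turns - std::floor(turns)) * kTableSize) % kTableSize;
    return kSineTable[index];
}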

 

Sometimes it is clear to see places with multiply nested loops, places with exponential computational requirements, code that has known-slow algorithms with known-fast alternatives.  And of course, the fastest work is the work that is never done. 

 

So you may have an algorithm in place that has n^3 growth.  With 5 items it may take 60 nanoseconds, and that's great. With 10 items it may take 500 nanoseconds; that's fine.  With 100 items it takes 500,000 nanoseconds, and that is not fine.  Swap in an algorithm that takes a bit more time per value but grows linearly, and those times may become 180ns, 375ns, and 3750ns, all of which are great.

 

 

Algorithm performance can sometimes be reviewed in the source, but other times it requires analysis tools and profiling.

 

 

 

 

 

After algorithm selection, one of the biggest performance factors in games is data locality.  It has very little to do with ECS, although some ECS decisions can have a major impact on it.

 

Basic arithmetic from data already available to the CPU can be done quickly. Processor design allows multiple operations to take place at the same time in internal parallel processing ports, so a single basic arithmetic operation can take place in about one-third of a CPU cycle, or about 0.15ns per operation.  If you are using SIMD operations and the CPU can schedule them on ports in parallel, it can take one-sixteenth of a CPU cycle per value, or about 0.03ns. Those are amazingly fast, and that is why so many programmers talk about ways to leverage SIMD operations, which you might have heard of under the name MMX, SSE, or similar.

 

But there aren't many registers or L1 cache lines on the processor, and reading from memory is slow. If the data is in L2 cache there is an overhead of about 7ns, or roughly 20 CPU cycles.  If the value is in main memory it takes about 100ns, or roughly 240 CPU cycles.

 

Cache misses (needing to get something from farther away in memory) and cache eviction (not using what is already in the cache) can completely destroy a game's performance. Jumping around all over memory might not be a bad thing; what matters is cache performance. If you are jumping around on data that all sits in the chip's L1 cache it is amazingly fast; jump around on data in the L2 cache and performance drops by roughly an order of magnitude; jump around on data that requires loads from main memory and it drops by another order of magnitude on top of that.

 

ECS systems tend to jump around frequently, but design of the systems can mean it is jumping all over in L2 cache or jumping all over in main memory. It is a fairly minor design change but it makes about a 10x performance difference.  

 

Two systems that look exactly the same can differ by an order of magnitude in performance based on data locality.  Even the same system can suddenly seem to switch gears from fast to slow when data locality changes.  You cannot spot the differences in data locality performance by reading the source code alone.  

 

 

 

 

Another major performance factor in games is how you move data around.  

 

You need to move data between main memory and your CPU, between both of them and your graphics card, to your sound card, to your network card, and to other systems.  System bus performance depends quite a lot on the hardware. Cheap motherboards and bad chipsets can move very little data at a time and have slow transfer rates. Quality motherboards and good chipsets can move tremendous amounts of data at a time with rapid transfer rates.

 

While you probably cannot control the hardware, if you know what you are doing you can coordinate how data moves around.

 

You can send data around from system to system all the time with no thought or regard for size or system effect. This is much like the highway system: sometimes you have near-vacant roads and can travel quickly, other times you'll have tons of cars saturating the road with all the vehicles sitting at a standstill. Your data will eventually get there, but the performance time will be unpredictable and sometimes terrible.

 

You can take steps to bundle transfers together and take simple steps to ensure systems don't block each other.  This is much like freight trains: huge bundles with cars extending for one or two miles. There is some overhead, but they are efficient.

 

Or you can take more extreme methods to highly coordinate all your systems and ensure that every system is both properly bundled and carefully scheduled.  This is like mixing the capacity of long freight trains with the speed of bullet trains: enormous throughput, low latency, and everything gets moved directly to the destination with maximum efficiency.

 

Like memory performance, you cannot spot the differences in bus usage performance by reading the source code alone.

 

 

 

 

 

 

There are many more factors, but those normally have the biggest impact.

 

These factors by themselves will account for the vast majority of the performance characteristics of an engine.  A few minor differences in each of those things mean the difference between a game running at 10 frames per second or running at 100+ frames per second.




#5306083 Initialize your goddamn variables, kiddies

Posted by frob on 15 August 2016 - 10:27 PM

the fun cousin of that demon, the denormal numbers, hiding in the infinitesimal gap in between zero and "smallest possible float value".

 

Yes, "Fun".

 

When dealing with numbers in games with a 1 meter scale, a good sanity test is: "Is this number less than the width of a hair or fingernail?"  Any distance smaller than around 0.0001 generally ought to become 0.
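In C++ that sanity test can be a one-liner; the epsilon value here is just the rough threshold mentioned above:

#include <cmath>

// Snap near-zero values to exactly zero so denormals never creep into the
// simulation; 1e-4 is roughly "thinner than a hair" at a 1-metre world scale.
constexpr float kEpsilon = 1e-4f;

inline float snapToZero(float value) {
    return (std::fabs(value) < kEpsilon) ? 0.0f : value;
}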




#5306023 Is The "entity" Of Ecs Really Necessary?

Posted by frob on 15 August 2016 - 02:00 PM

There's no reason you can't do both; have logic in components that only operate on itself, and have systems that operate on more than 1 component.

 

So much this.

 

Just making up component names for these examples, but it is something I've done hundreds of times over the years.

 

 

 

It is straightforward and easy to write a component that hooks up to various commands and then only manipulates itself.

 

But for interplay, you could write a component that detects another component before doing its work.  All the systems I've worked with have had a way for components to identify their containing parent in the hierarchy, and have also provided functions to scan for contained components, either as direct children or as nested elements.  Perhaps you'll build a component that scans its parent for a PhysicsObject component as a direct child.

 

Or maybe you write a mesh deformation component that gets its parent, then requests all Mesh components from the parent. You may want to require that the parent object contains exactly one Mesh component, and log an error every time the component is called and the expected Mesh isn't found.

 

Or maybe you want to write an AI component that searches the parent for both an AiController component that you wrote and an AiLocomotor component. You can then derive your own specialized AiController or AiLocomotor types from interfaces, check that exactly one component implementing each of those interfaces is attached, then either log an error or run the code using the detected components.
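To show the shape of that lookup, here is a self-contained C++ sketch with a made-up minimal component API (Entity, Component, find, and onAttach are assumptions, not any specific engine):

#include <cstdio>
#include <memory>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

struct Entity;

struct Component {
    Entity* parent = nullptr;
    virtual ~Component() = default;
    virtual void onAttach() {}      // called once the component is on an entity
};

struct Entity {
    template <typename T> T* add() {
        auto owned = std::make_unique<T>();
        T* raw = owned.get();
        raw->parent = this;
        components[std::type_index(typeid(T))] = std::move(owned);
        raw->onAttach();
        return raw;
    }
    template <typename T> T* find() {
        auto it = components.find(std::type_index(typeid(T)));
        return it == components.end() ? nullptr : static_cast<T*>(it->second.get());
    }
    std::unordered_map<std::type_index, std::unique_ptr<Component>> components;
};

struct AiController : Component {};
struct AiLocomotor  : Component {};

struct AiBrain : Component {
    AiController* controller = nullptr;
    AiLocomotor*  locomotor  = nullptr;
    void onAttach() override {      // scan the parent for required siblings
        controller = parent->find<AiController>();
        locomotor  = parent->find<AiLocomotor>();
        if (!controller || !locomotor)
            std::puts("AiBrain: missing AiController or AiLocomotor on parent");
    }
};

Note the ordering: add the AiController and AiLocomotor to the entity before the AiBrain, or the scan in onAttach will report them missing.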

 

This type of automatic connection is an alternative to directly specifying the target. The other approach is to have the system inject the target: you expose public methods to get and set the target component, which validate that it implements the interface you're interested in.  The target can then be injected by the game engine at load time, injected by your own code that replaces the component at runtime, or set by some other means. Automatic detection makes things slightly easier for anyone manipulating the world and slightly less error prone in case someone forgets to specify the target.




#5306000 Good C++ Learning Materials For A Beginner

Posted by frob on 15 August 2016 - 11:18 AM

I'm seeing how unhelpful the book is.  It's a shame I have to use it for this coming semester too.

 

That's actually a good thing.  The more you code and the more books you are exposed to, the more you can learn.  As you gain experience you'll see some patterns that work well and some that are poor or ineffective; these days the latter are called anti-patterns.

 

Learn from as many sources as you can.  

 

The biggest benefit of academic studies is that it forces you to be exposed to ideas you may not normally want to study, yet it only gives the shallowest exposure to the topics. You should do as much learning as you are able on your own. 




#5305837 Handling Items in RPG

Posted by frob on 14 August 2016 - 08:12 PM

Here is a discussion from two weeks ago, with several different examples of ways to do it.




#5305835 Use of user journeys in game development

Posted by frob on 14 August 2016 - 08:09 PM

I know we've never used them, at least to my knowledge.

 

From the description, it looks like the kind of thing instructors put in to try to prove that a concept is used in industry: you look for something that doesn't really exist, find things that somewhat match what a person might have done, and thereby convince a young, impressionable student that it is actually an industry standard.

 

 

There are times where people describe what a user might experience. It is good for products to develop a series of statements about a product. There is the single statement about what your product is. There is the "elevator speech", fast enough you can give it when stuck in an elevator and someone asks "what do you do?", typically from 20-60 seconds. There is a two-minute version. There is a two page version. And there is a 10 page description or overview or pitch.

 

All are quite different from "user stories", which are descriptions of tasks or features inside a program. Like "As a user, I can pick the 'game options button' and be shown the features for the game, including this and that and the other."




#5305586 Interfaces and code duplication

Posted by frob on 12 August 2016 - 10:57 PM

As this is For Beginners, trying to stick with answering your actual questions before the educational aspects...

 

 

 

Every Java resource I've read goes crazy on the idea of interfaces, and uses them for almost everything. What I don't get however, is how they are better than plain old inheritance.

 

 

Because as far as software architecture goes, it is a much better design in general.  There are some specific cases where that isn't true, but in the general case you should depend on abstractions rather than concrete types.

 

They are better than "plain old inheritance" because depending on an abstraction means you can add whatever implementation details you want later and they automatically fit into the system; they just work.

 

An example I frequently give is with graphics systems.

 

Programmers rely heavily on the abstract interfaces in Direct3D, yet they don't care one bit about the concrete types.

 

I request an ID3D11Device* and it just works.  I can do all the things I want to with that device.  I don't care if the device driver actually implements it as an ID3D11GeForce620, or an ID3D11Radeon6230, or an ID3D11IntelHD4400.  The only thing I care about is that it does all the things an ID3D11Device is supposed to do.

 

I can use my object, no matter what the final concrete type happens to be, and it should work perfectly.  Every concrete type is completely replaceable with any other concrete type.  It does not matter which one the system provides.  If I obtain something that implements an interface it implements it perfectly and could have been interchanged with any other object that implements the type.  That doesn't mean they perform identically, different graphics cards from different eras may perform faster or slower, or may perform different operations at different rates, but (barring driver bugs) they will always implement every operation exactly as specified, and you can do all the operations on any card that follows the interface.

 

I shouldn't ever need to write code that detects what specific device it happens to be and write special-case code for that concrete type; instead I can use the provided interface and decide based on that.  Some functionality may differ between cards but I don't need to hard-code every single card that can support the feature, I can rely on the interface to query if the feature exists by getting device capabilities.

 

That last point also allows future-proofing.  If I write my program so it works on all of today's cards, and sometime in the future someone introduces a new card with different capabilities, it just needs to implement the right interface and it drops in perfectly.  If it provides different capabilities, they can be queried through exactly the same capability-query mechanism the interface already exposes.

 

I don't need to modify today's program to support a new graphics card that ships three years from now, it will still work perfectly because it follows the same interface and is completely interchangeable with any other graphics card that also implements the interface.  By following the interface everything works perfectly right out of the box.

 

 

 

 

 

 

 

For another example, let's say I implement an event bus in my system. I want to be able to broadcast "Here is the event" and have everyone listen.  I don't care what systems are listening, they can be logging systems, they can be networking systems, they can be debuggers, they can be game event handlers, they can be anything else.  All I care is that they implement the interface called IEventBusListener.  If someone six months from now, or six years from now, decides to implement some code that implements the IEventBusListener interface they can add it to the code base and it will work perfectly.

 

For another example, let's say I'm working on a game and I want to find the nearest thing in the map that is a weapons spawner.  The foolish way to do that is to look for a specific list of weapons spawners that the developer happened to know about when the code was written.  If sometime later somebody adds another type of weapon they need to search for all the places in the code that relate to the weapons and hope they update all of them correctly.  It is impossible for someone to come along and add a new spawner type without modifying all the source code.  The smarter way to do it is to provide an interface, perhaps IWeaponSpawner, and search the map for anything nearby that implements that interface. If sometime later someone adds another type of weapon in the world it is automatically hooked up and completely integrated into the system.  (This works just as well for component based or ECS systems, some happen to use a slightly different identification method rather than inheriting an interface, but the overall design is identical. Look for something that implements the component, something that implements the interface, something that claims it does the action, rather than looking for a specific list of items that you happened to know about at implementation time.)
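A compact C++ sketch of that spawner example (IWeaponSpawner and the helper function are invented names) shows how callers depend only on the interface:

#include <limits>
#include <memory>
#include <vector>

struct Vec2 { float x, y; };

class IWeaponSpawner {                  // the abstraction callers depend on
public:
    virtual ~IWeaponSpawner() = default;
    virtual Vec2 position() const = 0;
    virtual void spawnWeapon() = 0;
};

class RocketSpawner : public IWeaponSpawner {  // one concrete type; new ones can
public:                                        // be added without touching callers
    Vec2 position() const override { return {10.0f, 4.0f}; }
    void spawnWeapon() override { /* place a rocket pickup in the world */ }
};

// Works for any current or future spawner type that implements the interface.
IWeaponSpawner* nearestSpawner(const std::vector<std::unique_ptr<IWeaponSpawner>>& all,
                               Vec2 from) {
    IWeaponSpawner* best = nullptr;
    float bestDistSq = std::numeric_limits<float>::max();
    for (const auto& s : all) {
        const float dx = s->position().x - from.x;
        const float dy = s->position().y - from.y;
        const float distSq = dx * dx + dy * dy;
        if (distSq < bestDistSq) { bestDistSq = distSq; best = s.get(); }
    }
    return best;
}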

 

 

When you write code, always write code that depends on the abstract types, on the interfaces.  Only use the operations provided in the abstract types, on the interfaces. This helps all development down the road and new code can just drop into place.

 

 

 

 

Say I have a class GameObject, being the abstract base class for everything in my game. Something in the game might be a visible physics object, so if I want an object that does that using Interfaces, I would write a Renderable interface and a Physics interface and implement both. But say I have an object which doesn't use physics. I would have it implement just the Renderable interface.

 

You could do that, but most systems don't.

 

As mentioned above, this is a HAS A relationship. This is composition, not interface inheritance.

 

A game object does not satisfy IS A.  Substitution doesn't work: saying a game object IS A model, and swapping one out for the other, doesn't make sense.

 

A game object HAS A model.  Or maybe it doesn't.  Or maybe a game object has several models.  Maybe there are models for "Full", "2/3 Full", "1/3 Full", and "Empty".  Maybe there are models for "Base Started", "Base 10% Complete", "Base 20% Complete", "Base 30% Complete" ... "Base 100% Complete".  Maybe there are models based on damage taken, or based on state of repair.  You might implement an interface that says your object potentially has a model and request a reference. 

 

Again, your game object does not implement the interface to be rendered. It is not a thing to be rendered. You should not be able to swap it out with any other thing that could be rendered.  It may expose an interface to provide a reference to a renderable object, but it is not renderable itself. 

 

 

 

Similarly with physics, a game object HAS A physics shape.  Or maybe it doesn't.  Or maybe the game object -- a character -- has a capsule physics shape when running, a sphere physics shape when squatting, and no physics shape when deceased.  Once again, you might implement an interface that says your object potentially has a physics shape and request a reference to it, but it is a HAS A rather than an IS A relationship.
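A short C++ sketch of those HAS A relationships, with invented names:

#include <memory>

struct Model {};          // a renderable resource
struct PhysicsShape {};   // a collision resource

// The game object is not itself renderable or collidable; it merely owns
// optional pieces, which can be swapped out or absent entirely.
class GameObject {
public:
    const Model* model() const { return model_.get(); }             // may be null
    const PhysicsShape* physics() const { return physics_.get(); }  // may be null

    void setModel(std::unique_ptr<Model> m) { model_ = std::move(m); }
    void setPhysics(std::unique_ptr<PhysicsShape> p) { physics_ = std::move(p); }

private:
    std::unique_ptr<Model> model_;          // "Full", "2/3 Full", ... swap as needed
    std::unique_ptr<PhysicsShape> physics_; // capsule when running, none when deceased
};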

 

 

 

Why are Interfaces used instead of just Inheritance alone?

 

 

There are a set of programming principles under the great acronym SOLID.  Through a bunch of trial and error, people have discovered that when you implement these principles in your code designs it goes a long, long way in keeping your code maintainable. It helps reduce bugs. It helps make it easy to swap out systems, to extend systems, to reuse systems.

 

Interfaces enable several of those principles.

 

Yes, interfaces have a small cost of a few nanoseconds when they are used.  If there is only one concrete type and there will only ever be one concrete type, you should consider using the concrete type directly rather than paying for the virtual dispatch that interfaces or abstract types require; for Java, that means marking the class as final.  The O in SOLID, the Open/Closed principle, means keeping things open for extension (you can make new behaviors and extend on it) but closed for modification (it must implement exactly the behavior of the base or abstract type, and be completely interchangeable with it).  If you are not following this rule and something is closed to both, make it completely concrete; in Java that means marking it with final everywhere so you're not paying the dispatch cost, since Java methods are virtual by default.

 

However, any time you want replaceable parts -- and you almost always want replaceable parts -- it is the cheapest method available because every system out there has heavily optimized the pattern.  

 


Use interfaces or abstract base types or whatever they are called in the various other languages when you need it, and you almost always need it.  They are a great solution to many problems that every experienced programmer knows and every tool recognizes and makes faster for you.




#5305422 Estimating development time

Posted by frob on 11 August 2016 - 09:07 PM

I need tips on how to better estimate development time for games.

Experience and training.

The way I most improved my estimation was being the only programmer on the team. I estimated all my programming tasks and was held accountable -- via extra hours -- for my estimates.

If you aren't accountable for your estimates in a very real way it is difficult to be motivated to improve them.

Also if I were to give you a project like this, how long do you think it would take to develop it from start to finish? Assuming there is only one programmer and one artist and the game is being developed in Unity.

Features are very terse, there isn't enough detail for good estimates.

Endless runner, iOS & Android, Unity engine, single player. By itself, if you know what you are doing and want to make a potentially viable commercial product, I'd say bare minimums of 4 months programming and 3-4 months art. That's for a terrible product that cannot compete commercially but at least is not embarrassingly bad on the marketplace. If you aren't already experienced and don't know what you're doing I'd say at least 8 months programming (or far more, depending on the programmer's skills) and 4-8 months artwork, since the artist may have to rework things many times.

Online cross platform multiplayer for 4 players is going to add about two months programming if you really know what you are doing and your app is built for it up front, easily six months or more if you don't or the app is wrongly designed for it, and you'll need to include QA because bugs are hard. Artwork is UI, so far less time, perhaps a few weeks.

The random powerups are not that difficult, maybe another 1-2 weeks programming and whatever time is needed for art depending on visual complexity, perhaps a few more weeks if you have many diverse powerups or they do different things.

The link to Facebook could be fairly simple if it is just posting directly; there are components you can buy that do almost all the work. Add two weeks if you're using one of those, one week if you've used them before and know what you're doing. Otherwise add about a month, or two if you are learning how to post everything for the first time. If you want more than that, say full Facebook integration and all that jazz, you can add a full development year or even multiple development years.

Same for Twitter. If you're just posting, and using a plugin or their direct API, expect about the times mentioned above.

In-game chat basically comes for free with the other networking. Maybe 1-2 weeks for UI if you know what you are doing, more time if you have to learn how to handle UI and text input.

Friend invites, friends list, and join-in-progress all mean trouble unless you've got a lot more infrastructure than you are hinting at. Far beyond the scope you're mentioning.

Global leaderboards are a huge risk, I wouldn't touch them. The simple solutions are the most hackable, and they tend to become cesspools.

Having in-app purchases itself isn't too bad but you didn't list what it is that is going to be purchased. If you use a good cross platform library you won't need to write anything yourself, just hook in to a cross-platform entitlements system.

Ads will depend on the types of ads you use and the libraries they provide. Anywhere from one month to six months to implement, depending on your skill and experience.



The bare-bones "I can show my friends I made a game" endless runner could be done in about a month of programming work, and about the same in art. The "I made a product that has the tiniest chance of being commercially viable" version is anywhere between one and three years of programming work and another year or so of artwork.

If that wasn't enough, the market for endless runners is not favorable to newcomers. The niche is saturated and even an amazing product is unlikely to succeed.


#5305358 Life as a Tools Engineer ( in game development )?

Posted by frob on 11 August 2016 - 01:08 PM

I have reached a point in life where the sort of hours you work in this role aren't meshing with my family life.
 

 

Stop working those hours.  It depends on the company, but at the ones I've worked at those hours are not required by management; they are worked quite voluntarily by people on the team. I've been on teams where a few people run themselves ragged with extra hours and stay late into the night, while others on the same team work exactly a full day and leave at 5:00 on the dot. In such environments there are engineers who (mistakenly) believe that the entire company's success hinges on their putting in an unreasonable number of hours.  If management truly is demanding you work more hours than are standard for your nation, change companies.

 

At the last company I worked at, the entire office of several hundred people was typically vacant by around 4:45; those who remained were finishing up their daily tasks. It is entirely cultural, and sadly many studios operate in startup mode, taking on the behavior of company founders who are fully invested in the company rather than of workers at the company.

 


1) see if anybody here has been/is currently a Tools Engineer in the Games Industry?  

2) What is the day-to-day like?  

3) What are the projects like?  

4) What are the hours like?  

5) I have worked with Tools Engineers in the past, and I am curious about the role.

 

1) I've worked on tools teams at two companies.  Many others here on the site have worked and are working in the role.

 

2) Day to day it is a programming job.  Instead of gameplay tasks like "write a script for weapons spawners" it is tools tasks like "write an importer for Collada files".  The exact needs depend on the team's needs.  They may need stuff for the engine, stuff for content pipeline, stuff for build systems, stuff to make various people's lives easier.

 

3) Every project is different. I've built image compositing tools, tools for processing resources for Nintendo DS, art tools to help the artists view their assets faster, engine tools for screen layout, engine tools for data parsing, pipeline tools to help the build system, tools to help designers modify data more easily, tools to extract game statistics that are run automatically and appended to a file to generate trend lines.  Lots of tools out there.

 

4) Hours are whatever you let them be, as above.  A small number of companies are abusive; don't work there.  Most companies have sane core hours, and many in this industry are more than willing to let foolish young workers abuse themselves with late hours if the workers wish. Pay attention to what management actually says: if the core hours are 9:00 AM - 4:30 PM, you shouldn't routinely be at the office at 6 PM or even at 5:30 PM.  Yet I've been at companies with those core hours where a few people still put in 9-hour, 10-hour, even 12-hour days. Mostly those were young, single adults with no real life to speak of.

 

5) If you've worked with them you should basically know the role.  It isn't too radically different from gameplay engineers, it is still writing code to solve problems, just uses different topical areas of code and different software libraries to accomplish it.

 

I know that their schedules are often different than the rest of the team, since they are in a support position.

 

That depends entirely on the company and teams.  Nearly every company I've worked at everyone has had the same core hours, except for QA who often arrives a few hours later in the day and works the same number of hours later in the evening.  

 

Those rare teams that had different hours usually worked with other remote groups in different time zones.  When you've got teams on US Eastern Time and British Time, the US people arrive early and the UK people may work later.  When you've got US east coast and US west coast teams the west coast people may need to arrive at 8 AM, the east coast may not arrive to the office until 10 AM.  




#5305336 Confused about 3D game dev

Posted by frob on 11 August 2016 - 10:15 AM

now im just confused to start which one?

 

 

Any of them will work.

 

 

 

also don't know basics of 3D gamedev ... can use only free resources on internet 

 

 

In my view, the biggest thing you need is the math of 3D worlds.

 

When you are in the 2D world you need 2D points (x,y), you need 2D vectors of (dx,dy), you need orientation which is typically an angle from a top-down view.  The mathematics to work in the world are elementary algebra, discrete trigonometry, and geometry.  That is, the math of elementary algebra to manipulate formulas and equations, enough trigonometry to handle the basic trig functions to manipulate the world, and enough geometry to handle things like circle/circle intersection, circle/line intersection, etc.

 

When you are in the 3D world you need 3D points (x,y,z), you need 3D vectors (dx, dy, dz), you need orientation which is typically a 3x3 or 4x4 matrix from the world's origin.  The mathematics to work in the 3D world are linear algebra, algebraic trigonometry, and topology, in addition to the math of 2D worlds.  That is, in addition to the math of 2D worlds above you need linear algebra to manipulate matrix and vector equations and higher dimension coordinate systems, enough algebraic trigonometry to manipulate polynomials as they operate in 3D, and enough topology to understand how to work with polygonal meshes in 3D space.

 

If you're looking for a good, free, online textbook, consider this one.  If you're looking for tutorials and videos, Khan Academy covers many of those topics.

 

When you are starting out, many people are able to hobble along without actually knowing that math. They will try to find math functions that give them their forward vector and up vector and similar without actually knowing the math involved, they will try to figure out what is forward-facing or backward-facing and with some luck stumble upon a web site that explains it. They will look at the programs that show them model meshes and try to figure out the names and words and manipulations through trial and error.  Some people are able to get started that way, but ultimately you will need a solid understanding of the math to succeed.

 

 

i use C++ for game dev, do you think i need more languages in future or i should start to learn more languages right now like c# or java?
 

 

Any of those will work.  C++ is adequate, and Unreal Engine uses it.  C# is adequate, and Unity uses it.  If you only know C++ then Unreal Engine may be a better fit for you right now.





