

frob

Member Since 12 Mar 2005

#5189624 Questions from a newcomer

Posted by frob on 28 October 2014 - 01:11 AM


My goal is to be able to create/develop 3D video games similar to San Andreas. So let's say I wanted to create Grand Theft Auto: San Andreas... where would I start as a complete beginner?

 

The last few GTA games have had budgets measured in fractions of a billion dollars. So start out with a quarter billion dollars, then hire several hundred experienced developers...

 

The For Beginners FAQ has already been mentioned. The Breaking In FAQ is also a bunch of excellent reading.

 

The very brief summary is to figure out what you want to do and build a path that is unique to you. If your interest really is programming, then the standard industry path begins with a degree in Computer Science or equivalent, along with building small game-related projects on your own. More details are in all those FAQ links.




#5189333 "Click to start the game"

Posted by frob on 26 October 2014 - 10:39 PM


Many games today come with a black screen with the words "CLICK TO PLAY". What is the purpose of this phrase?

 

 

 

Maybe starting right away after the game has loaded the level isn't "fair" in some games ...
Another way to prevent this is a countdown at the beginning of the level, similar to the "ready, set, go!" of races; you give the player a few seconds to get prepared.

 

Many times this is the case.  

 

Imagine a game takes a long time to load a level, perhaps a minute or longer on a low-spec machine. It is fair to give the player a momentary pause so they can break away from whatever side distraction they picked up while waiting on the loading screen.

 

In many network games there is also time required to establish all the connections and ensure everyone survived the transition from one state to the next. Such a delay gives players on slow machines a moment to load without punishing them relative to players with fast machines who would otherwise see the map or other game elements first.

 

Other times, it gives the player a few moments to acclimate themselves to the environment. In some games rather than a delay, the player-controlled character is placed in a safe "dead zone" where action does not begin until the player nudges the character into the room. The player can see what is going on, make a plan, and not need to worry about defending themselves or taking immediate action before facing the hordes. Even further, some games allow a large area where you can peek and plan and explore before triggering the big nasty whatever in the room.

 

In many of the classic arcade games like the 2D space shooters and platformers, this was accomplished by adding several seconds of time for the player to move where they want on the screen and get used to whatever background or other items are going on.  It is usually unfair of the designers to simply start the player in the middle of intense action where the player is punished for not knowing the situation in advance.




#5189197 Where do I start

Posted by frob on 26 October 2014 - 03:55 AM

I recommend you start with this FAQ from the job advice forum of the site. Lots of links to lots of reading.




#5189195 How Can I Make This Scene Look Better

Posted by frob on 26 October 2014 - 03:42 AM

Just beware since lighting in general adds emotional weight.

 

A long dark hallway with a small number of windows, their rays twinkling but also ruining your night vision, everything else in shadow, can be the epitome of a horror game.

 

A long, brightly lit hallway with cheery music and light streaming through the window might be great for little Jenny picking out her first puppy.

 

A long dark corridor with a single yellowish dangling lightbulb at the end might work well for a hideout or crappy apartment.




#5189194 Creating all scenario animations/reactionary situations?

Posted by frob on 26 October 2014 - 03:34 AM

Creating all scenario animations/reactionary situations? ... So how much would this tax the CPU/GPU if we were to allow for so many different possible animation scenarios?

The CPU makes sure all the models and textures are loaded, and then does a bunch of math to figure out the matrix values and other numbers needed by the shaders.

 

The GPU takes the rigged models, runs the rig and textures and other data through the shaders with the frame-specific values, and the result is a (hopefully) beautiful picture.

 

The CPU operates on animation curves. The animation curves are usually created by hand by animators. Sometimes they start from motion capture data, or even a DIY rotoscope: recording a video and using it as an overlay while they manipulate the models.
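
For concreteness, here is a minimal sketch of sampling one such curve on the CPU each frame. The keyframe layout and names are illustrative, not from any particular engine:

#include <vector>
#include <algorithm>

struct Keyframe {
    float time;   // seconds
    float value;  // e.g. a joint rotation in radians
};

// Linearly interpolate the curve at time t.
// Assumes keys is non-empty and sorted by time.
float SampleCurve(const std::vector<Keyframe>& keys, float t)
{
    if (t <= keys.front().time) return keys.front().value;
    if (t >= keys.back().time)  return keys.back().value;

    // Find the first key after t, then blend with its predecessor.
    auto hi = std::upper_bound(keys.begin(), keys.end(), t,
        [](float time, const Keyframe& k) { return time < k.time; });
    auto lo = hi - 1;
    float alpha = (t - lo->time) / (hi->time - lo->time);
    return lo->value + alpha * (hi->value - lo->value);
}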

 

While you can sometimes do a bit of work with animation blending (one clip for the arms and upper body, another for the legs and lower body), it often doesn't work well. The same goes for IK: you can use it here and there for specific tasks. Often IK is used in conjunction with an animation to provide fine tuning. The basic animation extends the arm; the IK portion lines it up perfectly with an object's animation point. The basic animation takes a step; the IK portion slightly raises or lowers the foot to match the terrain.

 

While you might be able to mix some animations --- you can mix a basic walking gait with arms down, with holding a cup, with turning the head, with talking --- there are many you cannot mix: holding a cup steady in the upper body while the lower body does a somersault, a tightly-held upper body with legs running rapidly, throwing a spear or ball while the legs are in a resting idle position. Many actions involve the full body for proper balance and control, and the animations need to reflect that. You can build state machines, and use other model and animation data, to control exactly what can be used where and how to transition between animation clips, as in the sketch below.
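
The sketch mentioned above, with hypothetical clip names; a real engine would attach blend times, bone masks, and conditions to each transition rather than a bare yes/no:

#include <map>
#include <set>
#include <string>

struct AnimationStateMachine {
    std::map<std::string, std::set<std::string>> allowed; // state -> legal next states
    std::string current = "idle";

    bool RequestTransition(const std::string& next) {
        if (allowed[current].count(next) == 0)
            return false;   // e.g. no "throw_spear" straight out of "somersault"
        current = next;     // a real engine would start a timed blend here
        return true;
    }
};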

 

Animators spend a LOT of time making those animations for major games. It is enough to keep the animation team employed for the length of development. For some games that is a handful of people for a few months; for others it is hundreds of animators over multiple years. If it were something that could be easily solved, most major studios would do that instead and fire their animation departments.




#5189142 Questions in my head that I can't find answers to!

Posted by frob on 25 October 2014 - 05:12 PM

While you are learning, what is important is that you are actually putting in effort to improve weak areas, to gain knowledge in places you lack it, and to gain experience in places you don't have it.

 


1. What is expected of me once I get accepted to join a development team? Do they
expect me to know everything about 3D and Game engines, and how the people there work together?
Or do I get to learn along the way?
 
2. What is crucial to learn to become an accepted 3D artist?

#1 When you are young and a recent hire, you will not be expected to know everything about everything.  You will work with others on your team who know what is happening, and they will give you some training on their specific tools and technologies.  You will also have an art director who reviews all your work, and probably a senior non-director who will watch over your work until you are no longer an entry level worker.

 

#2 As for what is crucial, the most obvious (and somewhat snarky) answer is to learn the things you need to know. Learn art. Not just "I drew these things", but take classes and read books on specific things of value. Schools are good at that because the instructors typically know the things you need to know, whereas you, as the uneducated and inexperienced person, only know the things you are already aware of, and hence are unlikely to know what you need to know. Many community colleges and universities have art degrees that focus on digital art. You'll learn all the classic art material, including the very important skill of critiquing your own work; you'll also learn the important skill of learning how to learn, and hopefully the skill of learning how to work.




#5188677 Animate in place?

Posted by frob on 22 October 2014 - 09:33 PM

Nearly all the major games I've worked on have followed the motion accumulation path.  Animate the motion and then pop to the correct location. 

 

Animators tend to understand how it works. If they struggle, they can put a marker in the last frame, a flag that effectively says "put the root bone here", which works just as well.
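
A sketch of that accumulate-then-pop idea, with made-up names: the entity position stays fixed while the clip plays, the mesh is drawn offset by the accumulated root motion, and on the last frame the entity snaps to the authored marker.

struct Vec3 { float x, y, z; };

Vec3 Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

struct Character {
    Vec3 entityPos;     // the single authoritative position in the world
    Vec3 visualOffset;  // root motion sampled from the clip, render-only
};

void TickAnimation(Character& c, const Vec3& rootMotionThisFrame,
                   bool lastFrame, const Vec3& endMarker)
{
    c.visualOffset = Add(c.visualOffset, rootMotionThisFrame); // mesh moves, entity does not
    if (lastFrame) {
        c.entityPos = endMarker;       // "put the root bone here"
        c.visualOffset = { 0, 0, 0 };  // mesh and entity coincide again
    }
}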

 

Having the animation update the root mid-animation can cause lots of subtle problems, especially when blending multiple animations together. Or, as Eck just mentioned above, if the animation and the engine motion are out of sync, the character will slide or over-animate, and the mismatch can be quite annoying. Better to have exactly one controller for position. Moving things in a game world can also become expensive, since you need to constantly update your spatial trees, navigation meshes, and world boundaries.

 

Also, having lots of moving objects can make mid-animation updates very troublesome. It is much easier to have the code drop down a jig or stencil using an atomic test-and-set operation: it both tests that the area is clear and tells the world "this area is in use". The area is now reserved for the avatar to do whatever the animation needs them to do. If the test-and-set operation fails, you know something is in the way, such as a wall or another object, and you can handle it up front rather than starting down the path of your animation and suddenly discovering the path is blocked.
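
A minimal sketch of that test-and-set reservation over a hypothetical occupancy grid. "Atomic" here means one indivisible game-logic update, not a threading primitive:

#include <cstddef>
#include <vector>

struct OccupancyGrid {
    int width, height;
    std::vector<bool> used; // width * height cells, true = occupied

    OccupancyGrid(int w, int h)
        : width(w), height(h), used((std::size_t)(w * h), false) {}

    // Test a rectangle and, if clear, claim it in one step.
    // The caller must keep the rectangle inside the grid.
    bool TestAndSet(int x, int y, int w, int h) {
        for (int j = y; j < y + h; ++j)
            for (int i = x; i < x + w; ++i)
                if (used[(std::size_t)(j * width + i)]) return false; // something is in the way
        for (int j = y; j < y + h; ++j)
            for (int i = x; i < x + w; ++i)
                used[(std::size_t)(j * width + i)] = true;            // area now reserved
        return true;
    }
};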




#5188675 Managing dependencies when writing middleware

Posted by frob on 22 October 2014 - 09:24 PM

Typically you provide the source.

 

That way, when an organization needs to use your library and they are already using some of the same dependencies, they can reference their existing copies. Or if their older dependencies are out of date or customized, they can replace them as appropriate. It also lets the organization ensure the build settings, linker settings, and other things you may not have thought about are all configured correctly.

 

As a common example, the vast majority of games replace the standard memory management systems with their own solutions. The generic memory management provided in the standard runtimes can do many things, but it fails pretty badly at alignment concerns, allocation pools, and quick teardown (although C++11 helped a little with the last one). If your library relies on the standard runtime for memory allocation, it can cause issues with their framework. The developer will probably contact you asking for the source so they can build with their custom allocators, likely with three or four different builds, each with its own settings for tracking and other support.
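
A common way to soften this, whether or not you also ship source, is to let the host application inject its own allocation functions. A minimal sketch with made-up names, not any particular middleware's API; note the default simply ignores alignment, which a real default should not:

#include <cstddef>
#include <cstdlib>

struct LibAllocator {
    void* (*alloc)(std::size_t size, std::size_t alignment);
    void  (*free)(void* ptr);
};

// Default to the standard runtime unless the game overrides it.
static void* DefaultAlloc(std::size_t size, std::size_t) { return std::malloc(size); }
static void  DefaultFree(void* p) { std::free(p); }

static LibAllocator g_alloc = { DefaultAlloc, DefaultFree };

void LibSetAllocator(const LibAllocator& a) { g_alloc = a; } // called once at startup

void* LibAlloc(std::size_t size, std::size_t align) { return g_alloc.alloc(size, align); }
void  LibFree(void* p) { g_alloc.free(p); }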




#5188674 Does Valve have a good working methodology?

Posted by frob on 22 October 2014 - 09:19 PM

I think you should read this: http://www.gamespot.com/articles/ex-valve-employee-blasts-the-company-for-feeling-like-high-school/1100-6411126/
 
Especially the part about the "hidden layer of management".
 
EDIT: I'm not saying that Valve's flat hierarchy is a lie or something like that, but I'm just referencing a different point of view about the whole thing, which is a nice reference.

 
I tend to agree with much of that review.
 
There is still power within the organization. The problem is that what is historically the most visible source of power, the hierarchical organization, is not there.
 
Going through the list, there are still people who have the information, people who act as gatekeepers to schedules and priorities. There are people who control the budgets. There are still high performers with well-respected track records. There are still experts and novices on every system. There are still the popular folk. There are still connectors and mavens.
 
The difficulty is that without the positional framework you don't have a known person to go to. In a traditional environment you can go directly to the right person if you know who that is; if you don't know, you go to your boss, who can help you find the right person.
 
In their model it can be extremely difficult to find the right person. You might ask two or three or ten people who the expert on a specific system is, and nobody knows. Worse, those same people don't know who to ask, so you're stuck emailing everyone@thecompany asking who has experience with the tech. If instead you had the positional hierarchy in place, you could ask up a layer, they could ask laterally, each of those could ask down if necessary, and when the knowledge is found it quickly returns to you.

 

In smaller groups, the idea of getting rid of hierarchical power and relying only on the other forms of power, the ones based on your actual competence, can work well. But as the organization grows, she is right that the sources of power become hidden. There is a hidden mesh of information power, a hidden mesh of who controls the budgets, a hidden mesh of who holds the other types of power. If for some reason you are unable to latch on and connect to a part of those meshes, you are better off leaving the organization for a place that will appreciate what you have to offer.




#5188665 Is optimization for performance bad or is optimizating too early bad?

Posted by frob on 22 October 2014 - 08:14 PM

First you measure. Then you look at your measurements and figure out what, if anything, is wrong.

That usually starts about halfway through the project. Just measuring. No changing code. Not yet. Measure early. Measure the size of static space requirements in a log that gets updated daily or on every build. Measure the size of buffers and high-water marks, automatically updated in smoke tests or automated tests. Measure performance values. Have all of them automatically generate log entries so you can track performance over weeks and months.
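
As an illustration of that measure-first habit, here is a minimal sketch of a scoped timer that appends to a log; the label and file name are made up, and a real build would buffer the writes rather than reopening the file:

#include <chrono>
#include <cstdio>

struct ScopedTimer {
    const char* label;
    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();

    explicit ScopedTimer(const char* l) : label(l) {}
    ~ScopedTimer() {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start).count();
        std::FILE* f = std::fopen("perf_log.txt", "a"); // append so trends accumulate
        if (f) { std::fprintf(f, "%s %lld us\n", label, (long long)us); std::fclose(f); }
    }
};

// Usage: ScopedTimer t("pathfinding_update");  // logs when the scope exits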

After you've measured, and after you have identified a few things that are wrong, you make changes to make that specific thing better.

Then you repeat, periodically, through the end of the project.

The details of exactly what you change are very specific to what you identified.

Usually at first there are some blatantly obvious things. You'll see functions like strlen() inside deeply nested loops. You'll find items added to containers that were sized too small, causing a large number of reallocations and copies. You'll find lots of calls to empty virtual functions. You'll find searching implementations that take far more time than they are budgeted.
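
The strlen() case is worth a concrete look, since it is so common. A sketch of the bug and the fix (ASCII-only uppercasing, just for illustration):

#include <cstddef>
#include <cstring>

void ToUpperSlow(char* s) {
    for (std::size_t i = 0; i < std::strlen(s); ++i)  // strlen recomputed every iteration: quadratic
        if (s[i] >= 'a' && s[i] <= 'z') s[i] -= 32;
}

void ToUpperFast(char* s) {
    const std::size_t len = std::strlen(s);           // hoisted: computed once, linear overall
    for (std::size_t i = 0; i < len; ++i)
        if (s[i] >= 'a' && s[i] <= 'z') s[i] -= 32;
}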

Other times you will notice things by watching the logs. Suddenly the high water mark will be 5x or even 500x higher than before, and you need to track back to where it was introduced, and why. Or you'll notice the data segment is suddenly huge, and you'll want to find out why. Or you'll see that you were following a certain growth rate and suddenly changed slope to a very rapid growth rate, and you'll want to track it back to the source. Having comprehensive metrics in regularly updated logs is very valuable.

When it is time to change things, use your profiler and other measurement tools. Measure carefully up front, then change one thing, then measure again. It is a slow process, but take the time to do it right. Depending on the difference in the results, either discard the change, submit it, or make additional changes and repeat. ALWAYS MEASURE, because sometimes you may think your change is faster, only to discover later that it makes things worse or has other negative performance side effects.

Over time the number of things you can find and fix starts to dwindle. As the clock ticks on you'll find big structural things that could be replaced, but due to the time left in the project, decide not to do it because of the risk.


As for the correlation, yes, you can very often exchange execution time for data space. Lookup tables are an example: you can precompute 10,000 answers, which means you pay the cost of storing and looking up the data, but it can be faster to load a 160KB table than to run big computations very frequently. Other times it is about picking a different algorithm, fixing bugs in the implementation, or changing the access patterns into cache-friendly formats (currently that means mostly linear, sequentially accessed, in 64-byte units).
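
A hedged sketch of that lookup-table trade; the table size and the choice of sine are just for illustration, and the indexing assumes modest angle magnitudes:

#include <array>
#include <cmath>

// 10,000 precomputed sine values over one period (about 40KB of floats).
static std::array<float, 10000> g_sinTable;

void InitSinTable() {
    for (int i = 0; i < 10000; ++i)
        g_sinTable[i] = std::sin(i * (6.2831853f / 10000.0f));
}

float FastSin(float radians) {
    // Map the angle to a table index; accuracy is limited by table size.
    int idx = (int)(radians * (10000.0f / 6.2831853f)) % 10000;
    if (idx < 0) idx += 10000;
    return g_sinTable[idx];
}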


#5188482 Heap Error

Posted by frob on 22 October 2014 - 02:07 AM


Oh, just a thought while posting this: if this is called on another thread, is it the same object as in the main thread?

As this is For Beginners, I'm going to recommend you rip out everything thread-related in your program right now. 

 

Threading adds concurrency bugs and race conditions. If you thought this class of bugs was bad, concurrency bugs are far worse: the same types of bugs, compounded across the additional dimension of concurrent execution.

 

Remember when I wrote "Coming in after memory corruption errors and race conditions, I'd rank it as probably the third biggest source of the nasty evil nemesis bugs that can haunt a code base for months or even years before isolating it and finding a fix" above? 

 

What you are saying is "I'm having trouble with the third-worst source of nasty bugs. Now I want to throw in the second-most evil source of nightmare fuel, and both of those sources can trigger the absolute worst of the three: memory corruption bugs." STOP. DO NOT PASS GO.

 

"Threading" and "For Beginners" are a terrible mix unless you enjoy spending your days hunting seemingly random crashes rather than learning useful material.

 

I strongly recommend you remove everything related to threading in your personal project, and stick with the much simpler realm of sequential, single-threaded programming.




#5188463 Heap Error

Posted by frob on 22 October 2014 - 12:07 AM

While those are great examples of why it happens, usually the code is much trickier to diagnose than those three-line examples.

 

The patterns are the same, but the bugs can be in very different parts of the code. The allocation can be handled in one library, the pointers then travel through your program, and you pass them to a second library which holds on to an out-of-date pointer.

 

Also, they don't necessarily manifest immediately. The failure can happen ages later. Some portion of the code may hold on to a pointer that was freed, and the stale use may come minutes, hours, or in a long-running program even days later.

 

"Its Complicated" is the truth. Sometimes they are very easy to find.  But sometimes they are nightmares. Coming in after memory corruption errors and race conditions, I'd rank it as probably the third biggest source of the nasty evil nemesis bugs that can haunt a code base for months or even years before isolating it and finding a fix.




#5188461 Looking for a "networking for noobs" tutorial

Posted by frob on 21 October 2014 - 11:57 PM

You might start with a visit to the networking forum's FAQ.

 

Networking is layer after layer after layer of protocols, so it is understandable to be confused at first. The layers range from the application at one end ("how should I format the chunks of data my application sends?") all the way down to the physical layers ("how do bits fly through the air in wifi?", "how do bursts of light on fiber transfer data?").

 

 

The FAQ links apparently haven't been cleaned up for a while, so some are broken, but start with the links under FAQ entry #1, which cover a lot of the system-level side of networking. Beej's Guide (among the links in FAQ entry #1) is an amazing tutorial; he has even published it as a 150-page book. Read it, learn much.
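
For a small taste of the system-level material Beej covers, here is a bare-bones sketch of a blocking TCP client using POSIX sockets. Error handling is trimmed to almost nothing, and the host, port, and message are placeholders:

#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int SendHello(const char* host, const char* port)
{
    addrinfo hints{};
    hints.ai_family = AF_UNSPEC;      // IPv4 or IPv6
    hints.ai_socktype = SOCK_STREAM;  // TCP

    addrinfo* res = nullptr;
    if (getaddrinfo(host, port, &hints, &res) != 0) return -1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0)
        send(fd, "hello\n", 6, 0);    // fire-and-forget for the sketch

    freeaddrinfo(res);
    if (fd >= 0) close(fd);
    return fd >= 0 ? 0 : -1;
}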

 

After you've spent a few days digesting and experimenting with the content in entry #1's links, go through FAQ entries #9 through #16 and read all those links; they will make more sense once you've gained experience.

 

Then read the rest of the content in the FAQ for good measure, being aware that it hasn't been seriously updated in several years.




#5188459 What is bad about a cache miss since a initial cache miss is inevitable

Posted by frob on 21 October 2014 - 11:25 PM

Caching is one of those magical things that take some understanding.

 

 

* It is not so much that a single cache miss is bad. As you wrote, they sadly happen quite frequently.

* It is much more that a single cache hit is awesome. And we are used to modern compilers and processors being very awesome.

 

Due to the complex nature of the modern CPU, the exact benefit of a cache hit is hard to pin down, but usually it makes the operation approximately free. Again: having the data already loaded usually results in free instructions. And since you can often store four values in a cache line but only pay the cost of a single load, it is buy one, get three free.

 

 

Good: If your code and data are able to operate entirely in the cache, you can see enormous speed improvements.

Bad: If your code and data don't play nicely with the cache, you will see performance plummet, since you pay full price for everything.
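
To make that Good/Bad pair concrete, here is a small sketch: the same sum computed two ways over the same array. The row-by-row loop walks memory sequentially and mostly hits cache; the column-first loop strides 4KB between accesses and mostly misses.

const int N = 1024;
static float grid[N][N]; // row-major: grid[r][0..N-1] is contiguous

float SumRowMajor() {
    float total = 0.0f;
    for (int r = 0; r < N; ++r)
        for (int c = 0; c < N; ++c)
            total += grid[r][c];   // sequential addresses, cache-friendly
    return total;
}

float SumColumnMajor() {
    float total = 0.0f;
    for (int c = 0; c < N; ++c)
        for (int r = 0; r < N; ++r)
            total += grid[r][c];   // 4KB stride between accesses, cache-hostile
    return total;
}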

 

 

Let's run a little thought experiment: imagine data carefully and artificially designed so that every single memory access is a cache miss and must be fetched from main memory. (Or just imagine the data cache is turned off.)

 

The exact time required varies based on the chipset, the motherboard, the RAM timings, and more. You've got those numbers on your memory chips, like "9-11-11-28" or whatever; those come into play too. Just for fun, we'll say you've got DDR3-1600 memory, which usually takes 5 nanoseconds per cycle. So for each trip to memory, you incur at least these costs:

 

CAS latency: roughly 12 nanoseconds. It simply takes this long for the command to go through.

Time to read and issue the command: from the memory chip above, 11 cycles * 5 ns (from the 200 MHz clock) = 55 nanoseconds minimum.

Time to charge a row on the memory chip: again 11 cycles, so 11 * 5 ns = 55 nanoseconds minimum.

And the final number is the cycles of overhead the chip needs to change gears between requests. We hit this every time, since in this scenario nothing is cached and we must wait for each row: 28 * 5 ns = 140 nanoseconds minimum.

 

Then you've got overhead for getting out of the chip, traveling the system bus, and more, but we'll discount that for now. We'll round down and call it 250 ns for one trip out to those DDR3 memory chips instead of the cache.

 

So by incurring a cache miss out to main memory, you just cost your process about 250 nanoseconds. 

 

And if your cache miss was something swapped out to disk, it can potentially take a LONG time to get that data, especially if the disk has spun down. On a slow spinning disk that was sleeping, the cost of your cache miss might be multiple seconds.

 

Normally that many nanoseconds is not a particularly enormous amount of time by itself; lots of things cost nanoseconds. Virtual function dispatch, for example, costs roughly 10 nanoseconds. We usually don't lament the cost of a single virtual function, because implementing the same functionality through other means would be at least as expensive. With virtual functions and other such costs, those nanoseconds buy us functionality. With a cache miss, we pay a cost and get nothing in return.

 

So in our contrived little program every single memory read and memory write incurs a full trip out to main memory.

 

A simple little addition function, x = a + b + c, gives three loads and a store, or 4 trips to memory, costing 1 microsecond in overhead.

 

The problem is that it adds up quickly. If *EVERYTHING* is a cache miss out to main memory, then *EVERYTHING* gets a 250 nanosecond penalty.

 

4000 trips to main memory mean 1 millisecond lost. That is about 10% of a graphics frame on a modern game that you just spent in overhead.

 

Roughly 40,000 trips to main memory means you just paid the cost of a full graphics frame accomplishing no productive work.

 

So in our thought experiment with a non-caching processor, if you want a fast frame rate you are limited to around 30,000 total operations per frame.  It is not because the processor is slow -- the processor can handle the work quite handily -- it is because the cost of trips to main memory dwarfs all other performance concerns.

 

Now, remember that modern chips don't work like the thought experiment. Importantly, they do have their caches turned on. The system transfers memory blocks larger than a single byte into multiple levels of cache; modern chips typically have L1, L2, and L3 caches. The fastest cache is tiny, and each level outward is slower but larger. Thankfully, that means trips all the way out to main memory are rare.

 

Many times, a cache miss on the die will jump out one level of cache and find the data right there, almost instantly available. That fetch also serves as a hint for the outer cache levels to prefetch the following blocks of memory. So even a miss is often not 250 ns; it may be only 30 ns or 80 ns or 100 ns, because the data is already in the L1, L2, or L3 cache rather than sitting out in main memory.

 

 

One of the biggest differences between Intel's i3/i5/i7 and similar chip lines, far and away the biggest factor in their compute-bound performance, is cache size. In the current Haswell lineup the cache sizes are 2 MB on the lowly Celeron, 3 MB on most of the i3s, 4 or 6 MB on the i5s, 8 MB on the i7s, and 20 MB on the i7 Extreme. The server lineup (E3, E5, and E7) ranges from 8 MB to 45 MB of cache. Cache is how the chip gets fed, and it doesn't matter how fast the processor is if it is sitting around waiting for data to compute with. A bigger, faster cache is usually the most critical performance enhancer for compute-intensive systems.

 

The secondary feature that interacts with caching is hyperthreading. Hyperthreading (as explained in the article linked above) keeps the same processor internals and bolts on a second instruction decoder, letting two threads run on the same physical core. That way, when one thread stalls on a cache miss or other hazard, the other can continue to run with data already in the cache. Of course, if both threads are compute-intensive, they end up fighting for cache space. If you are not compute-bound, hyperthreading can give nice speedups; when you are already fighting over cache space in compute-bound threads, it doesn't help matters. That is why the second biggest difference between the Intel processor tiers is hyperthreading, which is basically just taking advantage of properties of the cache.

 

 

Hopefully that will give a bit deeper understanding about CPU caches. 

 

TL;DR: Individual cache misses are inevitable, but every cache miss has a cost with no benefit. Better cache use and bigger caches both provide better performance for compute-intensive tasks.




#5188398 Is there a way for me to see my game`s performance or frame rate?

Posted by frob on 21 October 2014 - 03:25 PM


Some engines calculate the framerate from the frametime (framerate = 1000 / frametime_in_milliseconds, or framerate = 1 / frametime_in_seconds). This is cool but I don't like the flickering framerate as numbers change every single frame.

 

I love a one-second interval update showing three values with actual time spent:  min / avg / max

 

For example, you might have these microseconds:    8413 us / 9341 us / 9642 us

 

Seeing a spread of times gives a lot more insight. Some made-up illustrative examples:

 

8413 us / 9341 us / 9642 us  <-- Comfortably 100 frames per second, very consistent times each frame.

1534 us / 9458 us / 27314 us <-- Roughly 100 frames per second, but erratic timing with occasionally very slow frames.

0 us / 16000 us / 875345 us   <-- Roughly 60 frames per second, but still a slideshow, since 59 frames are nearly instant and one frame takes most of a second. Fire everyone.

 

Giving both the min and max lets you see much more than just the average. They help identify bottlenecks and dropped frames that may not be obvious from the average alone.
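
If it helps, here is a minimal sketch of that readout, with made-up names; feed it each frame's duration in microseconds and it prints min / avg / max once per second:

#include <algorithm>
#include <cstdio>

struct FrameStats {
    long long minUs = -1, maxUs = 0, totalUs = 0;
    int frames = 0;

    void AddFrame(long long us) {
        minUs = (minUs < 0) ? us : std::min(minUs, us);
        maxUs = std::max(maxUs, us);
        totalUs += us;
        ++frames;
        if (totalUs >= 1000000) {   // one second elapsed: print and reset
            std::printf("%lld us / %lld us / %lld us\n",
                        minUs, totalUs / frames, maxUs);
            *this = FrameStats{};
        }
    }
};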





