
Hodgman

Member Since 14 Feb 2007
Offline Last Active Today, 03:28 PM

#5185657 How to prepare myself to get a job in a AAA game company?

Posted by Hodgman on 07 October 2014 - 06:56 PM

Candidly, game development is not a 40 hour a week job and there's a lot of people willing to put the hours in out there. Some companies might pay lip service to "work life balance" but we all know it's not true.

I would say that more studios are figuring out that overtime is harmful to a project. When I worked at a 400 staff developer, we never did overtime (38 hours a week, every week). Recently at a <50 staff developer, when the management asked people to start doing 50hr weeks instead of 38hrs, the lead programmer resigned on the spot, followed by a lot of other senior staff in the following weeks. The backlash almost killed the studio.
When working for larger companies, you really do have a choice as to whether you'll put up with that sort of abuse or not. Just say no, kids!

However, yeah, as a beginner trying to get their foot in the door, you have no leverage at all and aren't in demand, so you'll probably want to choose to put up with these kinds of companies just to get the experience...


#5185638 Lines of code language comparisons

Posted by Hodgman on 07 October 2014 - 04:33 PM

But it can't possibly be a completely ignorable factor, seeing as more code clearly means more time spent coding

No, that isn't clearly true.
How is that not clearly true?
More code equals more time typing.

If I'm regurgitating code flat-out, I could probably write 10000 LOC a day... which means that Braid is just two weeks' work, right?
Obviously, that's not how things work. Jon Blow did not just sit down and churn out 100k LOC non-stop and then have a game at the end of it.

Your average professional programmer is more likely to produce about 100 LOC per day, which indicates that raw typing is only about 1% of their actual job.
When optimizing any process, you don't start with the parts that only take up 1% of the time.

For any particular task, a productive language doesn't necessarily result in fewer lines; it results in less careful, focused thinking being required to write and read the code.
The majority of a programmer's job is actually reading other people's code, and then modifying code. Writing new code from scratch comes way after those two activities.

For an extreme example, compare one line of C# LINQ to the equivalent x86 Assembly.
In the former, you can have one very readable line that sums all the numbers in a linked list. In the latter, you've got pages of what may as well be hieroglyphics. Even someone who's an expert at assembly will take minutes to pore over those hieroglyphics and piece together their purpose/meaning. If you then had to modify the ASM, it would be a complex task requiring expert skills and a lot of time.
It would be much more productive if you could instead just modify some high level C# code.

To avoid jumping to conclusions here though, let's say you've got another task that requires carefully specifying the byte-by-byte memory layout of a complex, compressed data-structure of some kind. You actually care about the exact placement of your data in RAM/address-space here.
For this task, Lua is just out of the question (it doesn't offer that capability, at least without modding the Lua VM). C# has the capability, but the code becomes extremely verbose and ugly. C has the capability by default, and the code is simpler.
Jon Blow's game-state replay system is a good example of one of these complex "low-level" tasks, where a lower-level language like C ends up being a better fit than a higher level one like C#.
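A minimal sketch of the kind of "exact layout" task meant here (the struct and field choices are made up for illustration): a vertex compressed into exactly 8 bytes, which is trivial to express in C/C++ and awkward-to-impossible in higher-level languages.

```cpp
#include <cstdint>

// A compressed vertex with an exact, byte-for-byte specified layout.
#pragma pack(push, 1)
struct PackedVertex
{
    int16_t x, y, z;    // position quantized to 16 bits per axis (6 bytes)
    uint8_t nx;         // normal packed into a single byte
    uint8_t material;   // material index
};
#pragma pack(pop)

// Document the layout assumption with an assertion, as discussed later
// in this thread -- if packing ever changes, the build breaks loudly.
static_assert(sizeof(PackedVertex) == 8, "unexpected packing");
```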

In both those examples, it's not the LOC count that makes a difference, it's -
• How readable is it? How long does it take someone to understand the workings of the algorithm embodied in the code?
• How fragile is it? How likely is it that a bug will be introduced when someone modifies the code (probably due to them failing to read an important detail)?
• How flexible is it? Can you edit the algorithm later without having to rewrite everything from scratch?
• How correct is it? Can a peer review prove formal correctness? Are there assertions for every assumption made by the coder, to detect any bugs that may occur later? If it is identified as incorrect/buggy later, how hard will it be to diagnose potential issues?


#5185197 Texture Storage + Texture Views AND mipmapping

Posted by Hodgman on 05 October 2014 - 04:57 PM

They both look mipmapped to me... except that in the 'texstorage' screenshot, it looks like the mipmaps have been generated using a worse algorithm than in the other screenshots.

 

Do you generate the mipmaps yourself, or do you just ask the GL driver to create them for you?

[edit] To answer my own question -- you're using glGenerateMipmap in your code snippet, so the driver is creating them for you :D

In that case, it just looks like your drivers suck at creating mipmaps in some cases?

 

There's no special mipmap generation hardware in modern GPUs -- when you ask the driver to create them for you, you're actually just asking it to load up a built in shader program and dispatch a bunch of compute tasks, or draw a bunch of quads, filling in your mips with appropriate data.

IMHO, relying on the driver for this is a really bad idea, because every driver might use a different algorithm, landing you in situations like this... I'd prefer to ship your own compute/fragment shaders for generating mips, or even better, to precompute the mips ahead of time using a very high quality algorithm and load them from disk.




#5185195 Lines of code language comparisons

Posted by Hodgman on 05 October 2014 - 04:47 PM


Both have the exact same writing speed, and the only difference is the language that they're writing in and perhaps the estimated amount of writing errors. Whatever IDE they're using, its strengths and weaknesses is ignored for now (because it depends on what IDEs you're using).

Who completes Braid first?

Lines of code per hour is in no way a useful number.

I've never been constrained in my time to complete a task by my typing speed!

 

There are many languages that use fewer lines of code (or characters) than your average bit of C code... but they also possibly require more thought per line...

 

Yes, different languages will be more/less productive in different situations, but simply comparing LOC counts misses all the important nuance of a true comparison.

 

At my last console development job, we used both Lua and C++.

Lua was used for high-level gameplay code because it seems to be more productive there -- i.e. less careful thought (and yes, fewer lines of code) is required to implement a task.

However, this isn't always true; for some more complex systems, it can actually be more productive (i.e. you get the task done faster) to use C++...

Then there's parts of the game where Lua just isn't a valid choice, so C++ would win by default...

Also interestingly, when refactoring code late in the project, it was much easier to refactor the C++ systems due to the strong typing and the fact that C++ IDEs are very smart/useful. Refactoring the Lua code was extremely time consuming, and a lot of bugs were accidentally introduced in the process, which wasted more time QA'ing, reproducing and fixing!




#5184897 Is 3D game development easier with an engine?

Posted by Hodgman on 04 October 2014 - 12:46 AM

Their sole purpose is to make development of 3D games easier.

 

An analogy - "Is building a car from existing parts really easier than forging all the parts from raw materials yourself?"




#5184715 Best Game Engine for indie game, continued ;)

Posted by Hodgman on 03 October 2014 - 12:41 AM

Yeah I would recommend UE4 or Unity to anyone looking to get started now. Unity is incredibly popular (and for good reason), and UE4 is incredibly cheap for a AAA engine with full C++ source code...

 

Unfortunately I'm one of those people who's stupid enough to make their own...

 

[edit] The Esenthel and C4 developers are members of this forum, so it's easy to get in touch with them if you have any questions about their products B)




#5184667 Funniest line of code ever ?

Posted by Hodgman on 02 October 2014 - 05:46 PM

I'd prefer it to be data-driven, so in human readable data files there's nice names like forward/backward and sets like idle/run, and states and transitions and events and behaviors linking them all together...
And then in the code there's a few really simple loops over arrays instead of all those hard-coded special cases :P ;)


But then we're back to square one -- the machine generated XML/whathaveyou will probably have formatting that will drive an OCD worker crazy, who posts it here as a coding horror :D


#5184555 Threading question

Posted by Hodgman on 02 October 2014 - 06:51 AM

Having a graph of scene/game objects with individual locks on them is bad juju. It leads to a completely non-deterministic game loop where individual systems are operating on data from this frame and the previous frame, as they all fight for locks and update your spaghetti mess of data at random intervals. Atomic members are even worse, as class invariants that affect more than one member can no longer be enforced, and individual objects can now be in a half-updated/half-stale state.

You shouldn't use locks or atomic variables, except in the core guts of the threading system of your engine where absolutely needed.

Most engines are based instead around data-flows.
e.g. Take some code:
A = Task1()
B = Task2(A)
C = Task3(A)
D = Task4(B,C)
you can look at the input regions and output regions of these tasks, and their dependencies between each other.
Task1 has no dependencies, and treats data block A as an output. Task2 and Task3 depend on Task1's output as their input. Task4 depends on the outputs of Task2 and Task3.
Once the dependencies are known, you can arrange the tasks into a Directed Acyclic Graph, and can use a topological sort to generate a linear schedule that the tasks can be executed in.
Each task might be made up of a list of parallel jobs.
In this example, all engine threads can execute the list of Task1 jobs, then sync to ensure the whole list is completed, then they can execute all of the Task2 and Task3 jobs, then sync again, then execute the Task4 jobs.
No locking. No atomic members in high level code. Just scheduling of execution.
The high level game code remains completely deterministic. It scales across any number of CPU cores. No deadlocks or race conditions are possible. It's more efficient due to a tiny number of sync points / atomic memory accesses (compared to one in every lock).
There's a reason that job pools have become the de facto standard way to scale across multiple cores in game engines. If you're touching locks, you're doing it wrong! :D


#5184548 Multithreading issues: Unit test fails sometimes

Posted by Hodgman on 02 October 2014 - 06:30 AM

Apparently poll returns "The number of handlers that were executed" - you should probably assert that the return value is 1000 as part of your test.

Looks like there's a race condition where the same entity is updated simultaneously by different threads. That could cause some updates to be lost. The fact that you're always witnessing 'x' updates being lost is probably down to the horribly random nature of MT bugs, combined with your hardware.


#5184477 Default light ambient, diffuse and specular values?

Posted by Hodgman on 01 October 2014 - 10:11 PM

If I do that, then I need to code the color in the diffuse channel, right? As for now, I specify an object's color via the ambient component (of the object). Its diffuse component is always grey, i.e. it adds some diffuse effect, but not in the color of the object.
 
Would that mean I could drop the object's ambient channel, and only work with the object's diffuse and specular channels?

Physically speaking, diffuse and ambient material colours are the same thing; it doesn't make sense for them to be different. Diffuse is the colour that the object scatters when directly lit, and ambient is the colour it scatters when indirectly lit. An object can't know whether light that's hitting it has arrived directly from a light source, or has arrived indirectly after bouncing around, so it's impossible for these two values to be different.
I'd recommend getting rid of the material ambient colour, and simply using the material diffuse colour if/when using "ambient lights".
 
On the same topic, for non-metal materials, the specular colour should almost always be a greyscale value, as reflections off these materials are not discoloured.
However, metal materials should have their main colour in the specular component, and should be very dark (or black for pure metals) in the diffuse/ambient component, because refracted light is absorbed into metals as heat, instead of being diffused/re-emitted.




#5184444 Bad code or usefull ...

Posted by Hodgman on 01 October 2014 - 06:12 PM

Thx for recap I kinda did but refuse to accept that this is good code unless you perform bounds checking in release code and then you get the performance loss. (quote below is good reply as to why this is acceptable in game libs)

The thing is, what do you do in the release build if you do detect an out-of-bounds error? The course of action on detecting this error is to fix the code... At that point, you know the code is wrong... but you've shipped the code and the user is running it!
e.g. if this were C#, a nice safe language, then the game would still just crash with an unhandled out of bounds exception.

All you can really do is generate a nice crash dump and ask the user to send it to you so you can develop a patch.
If you choose not to crash, then your game is then running in an invalid state from that point onward. You can try to stop the OS from generating access violations by returning zero (instead of reading/writing out of bounds)... but that may or may not completely break your game code. Maybe you end up getting the player stuck somewhere in a level, and a hard crash would've been preferable. Whatever you do at this point, you're screwed!
 
In this situation (where you are getting out-of-bounds errors), the broken code isn't inside the vector; it's the client of the vector that has broken its contract. The vector's contract states that the index must be 0/1/2, so it's up to the client code to ensure this is true.
If you want to abuse the type system to push this responsibility up to the client, you could make your operator[] take an enum rather than an int. If the client is doing integer-based access, they're forced to cast their int to the enum type beforehand. In a code review, this int->enum cast highlights the fact that you're going from a huge range of billions of possible integers down to a finite set of enum values -- so if there's no checking around that cast, the client code should fail its code review. All the way up the chain to the source of this integer data, you need to be able to prove that you'll only generate integer values that are within the enum set.

namespace Axis3d { enum Type { X=0, Y, Z, Count, First=X, Last=Z }; }
...
T m_data[Axis3d::Count];
T& operator[]( Axis3d::Type t ) { assert(t>=Axis3d::X && t<Axis3d::Count); return m_data[t]; }

e.g. this code is provably valid:

static_assert( Axis3d::X == 0 && Axis3d::Y == 1 && Axis3d::Z == 2 );
for( int i=0; i!=3; ++i )
  total += vec[(Axis3d::Type)i];
//or
for( int i=Axis3d::First; i<=Axis3d::Last; ++i )
  total += vec[(Axis3d::Type)i];

 
FWIW, one of the standard ways that we QA a game before release is to modify the memory allocator so that every allocation is rounded up to the size of a page - in different runs the actual allocation is either placed right at the start, or the end of the page. You then also allocate an extra 2 guard pages, one before and one after the allocation, and set these pages to have no permissions. If the game then reads/writes past the boundaries of an allocation, the OS immediately generates an access violation.
This greatly increases the amount of address space required by your game, so it pretty much requires a 64-bit build... but it's an amazing last line of defense against memory access bugs.
 

 

There's often more, such as Retail with logging+assertions enabled for QA, or Dev minus logging+assertions for profiling, etc...
Point is that every frame there are thousands of error checks done to ensure the code is valid. Every assumption is tested, thoroughly and repeatedly, by every dev working on the game and every QA person trying to break it.

And that's why video games don't have bugs or crashes any more.

 

When was the last time you crashed a console game? If Microsoft or Sony can reproduce a crash in your game, then your disks don't get printed and you can't ship! It does still happen, but it's pretty rare.
Often games ship knowing that there are a lot of gameplay bugs, because these won't necessarily cause MS/Sony to veto your production run, and you simply don't have the time/money to fix them before the publisher's release date. A lot of that comes down to business decisions, not engineering ones :(
Most publishers would rather ship on the day they decided a year earlier and release a downloadable patch later, rather than delay by a month.
 
On the other side of the coin, most developers get paid a flat fee no matter how well the game performs, or whether it's going to take longer to create or not. You might negotiate to get $100k per month for 12 months, which is barely enough to cover your development expenses. If it comes to the 13th month of development and you've still got 6 weeks worth of bugs to fix, you're not getting paid for that work... You'll just be bleeding money, trying to get the publisher off your back as soon as you can to avoid bankruptcy.
 

Don't even get me started on Skyrim.

TES games have a terrible amount of bugs because (edit: it seems that from an arrogant, ignorant outsider perspective) they hire non-programmers to write code, so they obviously don't give a shit about decent Software Engineering practices.

(Sorry to everyone at Bethesda. I love your games, but there are so many "script" bugs on every release. I hope practices are improving with every iteration!)

Plus their games are ridiculously massive, meaning lots of work in a short time-frame, plus it's hard to test everything, plus an AAA publisher having release dates set in stone...

Exactly why fix when you can prevent the bugs

Your bounds checking suggestion doesn't prevent any bugs. It only detects bugs.
 
 
If you want some more examples of terribly unsafe game engine code, check out this blob/types.h file.
This file is part of a system for storing complex data structures on disc, including pointers -- implemented as byte offsets instead of memory addresses. This allows the data structures to be loaded from disc and into the game without any need for a deserialization step.
e.g.

//Offset<T> ~= T*
//List<T> ~= std::array<T>
//StringOffset ~= std::string*
struct Person { StringOffset name; int age; };
struct Class { int code; StringOffset name; List<Offset<Person>> students; };
struct School { List<Class> classes; };
 
u8* data = LoadFile("school.blob");
School& school = *(School*)data;
for( int i=0, iend=school.classes.Count(); i!=iend; ++i )
{
  Class& c = school.classes[i];
  printf( "Class %d: %s\n", c.code, &c.name->chars );
  for( Offset<Person>* j=c.students.Begin(), *jend=c.students.End(); j!=jend; ++j )
  { 
    Person& p = **j;
    printf( "\t%s - %d\n", &p.name->chars, p.age );
  }
}

If the content in that data file is invalid, all bets are off. There's no validation code at runtime; it's just assumed that the data files are valid.
If the data file compilers are provably valid, then there's no need for extra validation at runtime.




#5184320 Default light ambient, diffuse and specular values?

Posted by Hodgman on 01 October 2014 - 06:43 AM

There's no answer to this, it's a complete hack, for artists to play with.

It makes no physical sense to begin with. The ambient value on a light source is the amount that the light affects every object, everywhere, from every direction - Jesus photons. The diffuse/specular light scales say how bright the light is for refractions/reflections respectively; as if you could emit a photon, wait to see whether its first event is a refraction (diffuse) or a reflection (specular), and then change the intensity of your light source after the fact.

If you're trying to emulate another program that uses this completely fictional lighting model, then the answer is - the same values that the artist was using in that program.
If you don't know, my advice would be for per light ambient to be very low or zero, and per light diffuse/specular to be equal.

If you're not trying to emulate another program, then you're free to choose a more sensible lighting model. In such cases, art is typically made specifically for a particular game, and artists will preview their work within that game to tweak the light/material values appropriately.

If you just want to see the shapes of models clearly, I'd try the also-completely-fake half-Lambert diffuse model, with two light sources of contrasting colours - e.g. a pink and a teal directional light coming from top-left and top-right. The half-Lambert model ensures the gradients wrap all the way around to the back, avoiding the flat look that plain ambient gives you.

Most physically based lighting models do not have separate ambient/diffuse/specular ratios per light because, as above, it's nonsense; they just have a single colour/intensity per light, and then the interesting ratios are part of the materials.


#5184318 DirectX to OpenGL matrix issue

Posted by Hodgman on 01 October 2014 - 06:30 AM

Off topic from your actual problem, but there's one slight difference in D3D/GL matrices - D3D's final NDC z coords range from 0 to 1, and GL's from -1 to 1.
So without any changes, your game will end up wasting 50% of your depth buffer precision in the GL version. To fix this, you just need to modify the projection matrix to scale in z by 2 and offset by -1 so you're projecting into the full z range.

If you're using a GL-oriented math library, it will construct its projection matrices like this by default, so you'd have to make the opposite scale/bias z adjustments to get D3D to work (the error would be a misbehaving near-clip plane, appearing too far out).


#5184308 Bad code or usefull ...

Posted by Hodgman on 01 October 2014 - 06:14 AM

I learned this trick from Washu ... That is a bit verbose and tricky to read, but other than that I think it has all the nice features one would want, including being correct according to the C++ standard.

That's really cute, but I'd hate to see the code that's produced in the hypothetical situation where the original implementation-defined approach isn't valid and/or the optimizer doesn't realize the equivalence...

I would however add a static assert to be immediately notified about the need for intervention, probably something like that: static_assert(sizeof(Vec3) == 3 * sizeof(float), "unexpected packing");

Definitely. When you're making assumptions, they need to be documented with assertions. The OP's snippet is implementation-defined, so you're making assumptions about your compiler. You need to statically assert that &y==&x+1 && &z==&x+2 (or the sizeof one above is probably good enough), and at runtime assert that index>=0 && index<3.


#5184304 Bad code or usefull ...

Posted by Hodgman on 01 October 2014 - 06:07 AM

On the dozen-ish console games that I've worked on, they've all gone beyond the simple Debug/Release dichotomy.
The most common extension is:
• Debug -- all validation options, no optimization. Probably does not run at target framerate. Extra developer features such as a TCP/IP file-system, memory allocation tracking, intrusive profiler, reloading of assets/code/etc.
• Release/Dev -- as above, but optimized. Almost runs at target framerate.
• Shipping/Retail -- all error handling for logic errors is stripped, all developer features stripped, fully optimized (LTCG/etc). Runs at target framerate or above.

There's often more, such as Retail with logging+assertions enabled for QA, or Dev minus logging+assertions for profiling, etc...
Point is that every frame there are thousands of error checks done to ensure the code is valid. Every assumption is tested, thoroughly and repeatedly, by every dev working on the game and every QA person trying to break it.

When it comes to the gold master build, you KNOW that all of those assertions are no longer necessary, so it's fine to strip them. Furthermore, they're just error-detection, not error-handling, so there's no real use in shipping them anyway 8-)



