Hodgman

Member Since 14 Feb 2007

#5184444 Bad code or usefull ...

Posted by Hodgman on 01 October 2014 - 06:12 PM

Thanks for the recap. I kind of did, but I refuse to accept that this is good code unless you perform bounds checking in release code, and then you get the performance loss. (The quote below is a good reply as to why this is acceptable in game libs.)

The thing is, what do you do in the release build if you do detect an out-of-bounds error? The course of action on detecting this error is to fix the code... At that point, you know the code is wrong... but you've shipped the code and the user is running it!
e.g. if this were C#, a nice safe language, then the game would still just crash with an unhandled out of bounds exception.

All you can really do is generate a nice crash dump and ask the user to send it to you so you can develop a patch.
If you choose not to crash, then your game is then running in an invalid state from that point onward. You can try to stop the OS from generating access violations by returning zero (instead of reading/writing out of bounds)... but that may or may not completely break your game code. Maybe you end up getting the player stuck somewhere in a level, and a hard crash would've been preferable. Whatever you do at this point, you're screwed!
 
In this situation (where you are getting out-of-bounds errors), the broken code isn't inside the vector; it's the user of the vector that's broken its contract. The vector's contract states that the index must be 0/1/2, so it's up to the client code to ensure this is true.
If you want to abuse the type system to push this responsibility up to the client, you could make your operator[] take an enum rather than an int. If the client is doing integer-based access, they're forced to cast their int to the enum type beforehand. In a code review, this int->enum cast highlights the fact that you're going from a huge range of billions of possible integers down to a finite set of enum values -- so if there's a lack of checking during this conversion, the client code should fail its code review. All the way up the chain to the source of this integer data, you need to be able to prove that you'll only generate integer values that are within the enum set.

namespace Axis3d { enum Type { X=0, Y, Z, Count, First=X, Last=Z }; }
...
T m_data[Axis3d::Count];
T& operator[]( Axis3d::Type t ) { assert(t>=Axis3d::X && t<Axis3d::Count); return m_data[t]; }

e.g. this code is provably valid:

static_assert( Axis3d::X == 0 && Axis3d::Y == 1 && Axis3d::Z == 2, "enum values assumed below" );
for( int i=0; i!=3; ++i )
  total += vec[(Axis3d::Type)i];
//or (note <=, because Last is itself a valid index)
for( int i=Axis3d::First; i<=Axis3d::Last; ++i )
  total += vec[(Axis3d::Type)i];

 
FWIW, one of the standard ways that we QA a game before release is to modify the memory allocator so that every allocation is rounded up to the size of a page - in different runs, the actual allocation is placed either right at the start or right at the end of the page. You then also allocate an extra two guard pages, one before and one after the allocation, and set these pages to have no permissions. If the game then reads/writes past the boundaries of an allocation, the OS immediately generates an access violation.
This greatly increases the amount of address space required by your game, so it pretty much requires you to do a 64-bit build... but it's an amazing last line of defense against memory access bugs.
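A minimal sketch of that trick, assuming Linux/POSIX (mmap/mprotect and 4KiB pages); on Windows the equivalent calls are VirtualAlloc/VirtualProtect. GuardAlloc is a hypothetical name, and freeing is omitted (it needs bookkeeping to recover the base address and total size for munmap):

#include <sys/mman.h>
#include <cassert>
#include <cstddef>

// Places the allocation at the END of its pages to catch overruns; a run
// placing it at the start would catch underruns instead.
void* GuardAlloc( size_t size )
{
  const size_t page = 4096; // real code would query the page size
  size_t dataPages = (size + page - 1) / page;
  size_t total = (dataPages + 2) * page; // +1 no-access guard page each side
  void* mem = mmap( 0, total, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0 );
  assert( mem != MAP_FAILED );
  char* data = (char*)mem + page; // skip the leading guard page
  mprotect( data, dataPages * page, PROT_READ|PROT_WRITE );
  // Put the user's block flush against the trailing guard page, so reading
  // or writing even one byte past the end faults immediately.
  return data + dataPages * page - size;
}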
 

 

There's often more, such as Retail with logging+assertions enabled for QA, or Dev minus logging+assertions for profiling, etc...
Point is that every frame there are thousands of error checks done to ensure the code is valid. Every assumption is tested, thoroughly and repeatedly, by every dev working on the game and every QA person trying to break it.

And that's why video games don't have bugs or crashes any more.

 

When was the last time you crashed a console game? If Microsoft or Sony can reproduce a crash in your game, then your disks don't get printed and you can't ship! It does still happen, but it's pretty rare.
Often games ship knowing that there are a lot of gameplay bugs, because these won't necessarily cause MS/Sony to veto your production run, and you simply don't have the time/money to fix them before the publisher's release date. A lot of that comes down to business decisions, not engineering ones :(
Most publishers would rather ship on the day they decided a year earlier and release a downloadable patch later, rather than delay by a month.
 
On the other side of the coin, most developers get paid a flat fee no matter how well the game performs, or whether it's going to take longer to create or not. You might negotiate to get $100k per month for 12 months, which is barely enough to cover your development expenses. If it comes to the 13th month of development and you've still got 6 weeks worth of bugs to fix, you're not getting paid for that work... You'll just be bleeding money, trying to get the publisher off your back as soon as you can to avoid bankruptcy.
 

Don't even get me started on Skyrim.

TES games have a terrible amount of bugs because (edit: it seems that from an arrogant, ignorant outsider perspective) they hire non-programmers to write code, so they obviously don't give a shit about decent Software Engineering practices.

(Sorry to everyone at Bethesda. I love your games, but there are so many "script" bugs on every release. I hope practices are improving with every iteration!)

Plus their games are ridiculously massive, meaning lots of work in a short time-frame, plus hard to test everything, plus an AAA publisher having release dates set in stone...

Exactly, why fix when you can prevent the bugs?

Your bounds checking suggestion doesn't prevent any bugs. It only detects bugs.
 
 
If you want some more examples of terribly unsafe game engine code, check out this blob/types.h file.
This file is part of a system for storing complex data structures on disc, including pointers -- implemented as byte offsets instead of memory addresses. This allows the data structures to be loaded from disc and into the game without any need for a deserialization step.
e.g.

//Offset<T> ~= T*
//List<T> ~= std::array<T>
//StringOffset ~= std::string*
struct Person { StringOffset name; int age; };
struct Class { int code; StringOffset name; List<Offset<Person>> students; };
struct School { List<Class> classes; };
 
u8* data = LoadFile("school.blob");
School& school = *(School*)data;
for( int i=0, iend=school.classes.Count(); i!=iend; ++i )
{
  Class& c = school.classes[i];
  printf( "Class %d: %s\n", c.code, &c.name->chars );
for( Offset<Person>* j=c.students.Begin(), *jend=c.students.End(); j!=jend; ++j )
  { 
    Person& p = **j;
    printf( "\t%s - %d\n", &p.name->chars, p.age );
  }
}

If the content in that data file is invalid, all bets are off. There's no validation code at runtime; it's just assumed that the data files are valid.
If the data file compilers are provably valid, then there's no need for extra validation at runtime.
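For reference, the usual core of such a system is a self-relative pointer; a minimal sketch (my approximation, not the actual contents of that types.h file):

#include <cstdint>

// A self-relative "pointer": it stores the byte distance from its own
// address to the target, so the blob stays valid wherever the file gets
// loaded into memory, with no pointer-fixup/deserialization pass.
template<class T>
struct Offset
{
  int32_t offset; // bytes from &offset to the target, within the same blob
  T* Ptr()        { return (T*)((char*)&offset + offset); }
  T& operator*()  { return *Ptr(); }
  T* operator->() { return Ptr(); }
};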




#5184320 Default light ambient, diffuse and specular values?

Posted by Hodgman on 01 October 2014 - 06:43 AM

There's no answer to this; it's a complete hack for artists to play with.

It makes no physical sense to begin with. The ambient value on a light source is the amount by which that light affects every object, everywhere, from every direction - Jesus photons. The diffuse/specular scales say how bright the light is for refractions/reflections respectively; as if you could emit a photon, wait to see whether its first event is a refraction (diffuse) or a reflection (specular), and then change the intensity of your light source after the fact.

If you're trying to emulate another program that uses this completely fictional lighting model, then the answer is - the same values that the artist was using in that program.
If you don't know, my advice would be for per-light ambient to be very low or zero, and per-light diffuse/specular to be equal.

If you're not trying to emulate another program, then you're free to choose a more sensible lighting model. In such cases, art is typically made specifically for a particular game, and artists will preview their work within that game to tweak the light/material values appropriately.

If you just want to see the shapes of models clearly, I'd try the also-completely-fake half-Lambert diffuse model, with two light sources of contrasting colours - e.g. a pink and a teal directional light coming from top-left and top-right. The half-Lambert model ensures the gradients wrap all the way to the back, avoiding the flat look that plain ambient gives you.
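For reference, the half-Lambert term is just a remap of the usual N.L; a sketch, assuming some Vec3 type with a dot() function:

// Standard Lambert clamps N.L at zero past the terminator; half-Lambert
// remaps [-1,1] to [0,1] so the gradient wraps to the back of the object.
// Squaring roughly restores a Lambert-like falloff while keeping the wrap.
float HalfLambertDiffuse( const Vec3& normal, const Vec3& lightDir )
{
  float wrapped = dot( normal, lightDir ) * 0.5f + 0.5f;
  return wrapped * wrapped;
}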

Most physically based lighting models do not have separate ambient/diffuse/specular ratios per light because, as above, it's nonsense; they just have a single colour/intensity per light, and then the interesting ratios are part of the materials.


#5184318 DirectX to OpenGL matrix issue

Posted by Hodgman on 01 October 2014 - 06:30 AM

Off topic from your actual problem, but there's one slight difference in D3D/GL matrices - D3D's final NDC z coords range from 0 to 1, and GL's from -1 to 1.
So without any changes, your game will end up wasting 50% of your depth buffer precision in the GL version. To fix this, you just need to modify the projection matrix to scale in z by 2 and offset by -1 so you're projecting into the full z range.

If you're using a GL-oriented math library, it will construct its projection matrices like this by default, so you'd have to make the opposite scale/bias z adjustments to get D3D to work (the error would be a misbehaving near-clip plane, appearing too far out).
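To make the scale/bias concrete, here's a sketch of patching a D3D-style projection matrix for GL, assuming column vectors (v' = M*v) and m[row][col] indexing; transpose the idea for row-vector conventions:

// We want z_ndc to go from [0,1] to [-1,1], i.e. z_ndc' = 2*z_ndc - 1.
// Before the divide by w, that means: z_clip' = 2*z_clip - w_clip.
void ConvertProjectionD3DToGL( float m[4][4] )
{
  for( int col = 0; col != 4; ++col )
    m[2][col] = 2.0f * m[2][col] - m[3][col]; // new z row = 2*zRow - wRow
}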


#5184308 Bad code or usefull ...

Posted by Hodgman on 01 October 2014 - 06:14 AM

I learned this trick from Washu ... That is a bit verbose and tricky to read, but other than that I think it has all the nice features one would want, including being correct according to the C++ standard.

That's really cute, but I'd hate to see the code that's produced in the hypothetical situation where the original implementation-defined approach isn't valid and/or the optimizer doesn't realize the equivalence...

I would however add a static assert to be immediately notified about the need for intervention, probably something like that: static_assert(sizeof(Vec3) == 3 * sizeof(float), "unexpected packing");

Definitely. When you're making assumptions, they need to be documented with assertions. The OP's snippet is implementation-defined, so you're making assumptions about your compiler. You need to statically assert that &y==&x+1 && &z==&x+2 (or the sizeof one above is probably good enough), and at runtime assert that index>=0 && index<3.
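As a sketch, for the hypothetical struct Vec3 { float x, y, z; } under discussion:

#include <cassert>

struct Vec3
{
  float x, y, z;
  float& operator[]( int index )
  {
    assert( index >= 0 && index < 3 ); // runtime: catch bad indices
    return (&x)[index];                // the implementation-defined trick
  }
};
// compile time: catch compilers/packing settings that break the layout assumption
static_assert( sizeof(Vec3) == 3 * sizeof(float), "unexpected packing" );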


#5184304 Bad code or usefull ...

Posted by Hodgman on 01 October 2014 - 06:07 AM

On the dozen-ish console games that I've worked on, they've all gone beyond the simple Debug/Release dichotomy.
The most common extension is:
• Debug -- all validation options, no optimization. Probably does not run at target framerate. Extra developer features such as a TCP/IP file-system, memory allocation tracking, intrusive profiler, reloading of assets/code/etc.
• Release/Dev -- as above, but optimized. Almost runs at target framerate.
• Shipping/Retail -- all error handling for logic errors is stripped, all developer features stripped, fully optimized (LTCG/etc). Runs at target framerate or above.

There's often more, such as Retail with logging+assertions enabled for QA, or Dev minus logging+assertions for profiling, etc...
Point is that every frame there are thousands of error checks done to ensure the code is valid. Every assumption is tested, thoroughly and repeatedly, by every dev working on the game and every QA person trying to break it.

When it comes to the gold master build, you KNOW that all of those assertions are no longer necessary, so it's fine to strip them. Furthermore, they're just error-detection, not error-handling, so there's no real use in shipping them anyway 8-)
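As a sketch of how that stripping is commonly wired up (the macro and function names here are hypothetical):

// Assertions exist in Debug/Dev builds; in Shipping/Retail they compile
// away entirely, so the gold master carries no error-detection overhead.
void HandleAssertFailure( const char* expr, const char* file, int line );

#if defined(BUILD_DEBUG) || defined(BUILD_DEV)
  #define GAME_ASSERT( cond ) \
    do { if( !(cond) ) HandleAssertFailure( #cond, __FILE__, __LINE__ ); } while(0)
#else
  #define GAME_ASSERT( cond ) ((void)0)
#endif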


#5184045 Good reading material on different BDRF's

Posted by Hodgman on 30 September 2014 - 07:04 AM

Maybe - http://digibug.ugr.es/bitstream/10481/19751/1/rmontes_LSI-2012-001TR.pdf




#5183962 Should I wait for OpenGL NG?

Posted by Hodgman on 29 September 2014 - 10:51 PM

If that's the case though, I'm not sure: should I necessarily start with the current OpenGL? What would be a great "baby's first modern graphics api"?

I'd personally recommend D3D11 over GL (and "Practical Rendering and Computation with Direct3D 11" as a reference book)... but if you already have the OpenGL SuperBible then you may as well give it a shot.
 

This is different than the 3.0 stuff which had legacy support. I believe the new api will represent a clean break from the current OpenGL, and they appear to have the major companies and hardware support behind it.

GL 3 (aka GL Longs Peak) was supposed to be a clean break from OpenGL, throwing out backwards compatibility and a decade of cruft so that the new API would correctly map to GPUs of the time... but they did a backflip late in the process and released the backwards-compatible GL 3.0 we actually got instead...
So going by history, GL NG might still end up being cancelled and just be released as GL 5.x :(

 

It's unlikely for them to backflip again, as D3D12 / Mantle are already going down this same path (clean break, new API based around modern GPU concepts, minimal abstraction) creating a good bit of pressure on Khronos to actually succeed this time.




#5183928 Legal use of fonts

Posted by Hodgman on 29 September 2014 - 06:51 PM

Nope it is not legal:
 

And you can use them in every way you want, privately or commercially — in print, on your computer, or in your websites.

 

That text is not the license; it's a human-readable explanation describing the license.

When you review one of the fonts on the Google Fonts page, at the bottom you'll see something like:
3 Styles by Vernon Adams
SIL Open Font License, 1.1

 

Clicking on the link will show you the actual license. If the font you want to use is under the OFL linked above, then you should be fine.

If you view your bitmap font as a piece of artwork, then there are no restrictions on you at all.

If you view your bitmap font as a derivative, then it itself becomes licensed under the OFL. You can still package/embed it into your game, but you must include the OFL text somewhere alongside it (in your readme or a link to it in your about screen). You can sell your game, encrypt your files, use DRM, etc... however, if someone manages to extract your bitmap font, they're free to reuse it themselves according to the OFL license.

 

So basically:

  • Stick to OFL licensed fonts (or other nice open-source licenses).
  • Include the OFL text (or a link to it) along with your game's files, mentioning which fonts are covered by it.
  • If someone 'steals' the font out of your game, they're free to use it too.
  • If you make modifications to the font, it is still OFL-licensed.
  • Don't plagiarize and pretend you're the original author.



#5183912 Should I wait for OpenGL NG?

Posted by Hodgman on 29 September 2014 - 05:15 PM

GLNG is a complete unknown at this point; it could be out next year, or in five years.
If you're ok with attaching your project to that kind of waiting game, then sure, wait...

You say you don't know GL - do you know any other graphics APIs?
Graphics APIs are a lot like programming languages - learning your first is hard, but learning new ones after that is easy.
If you haven't learned one before, then jump into GL or D3D11 now, so that when GLNG actually exists you'll be able to pick it up quickly.


#5183682 New trends in gaming: voxels are the key to real-time raytracing in near-term...

Posted by Hodgman on 29 September 2014 - 02:26 AM

Also, every year or so someone will bring up a technology where the company, such as Euclideon, keeps claiming it will allow "infinite detail" or "unlimited detail" using voxel-based rendering.
 
These are basically precomputed, voxel-based octrees that were all the rage in the 1970s. Storage speed and transmission speed have both increased, but still it is only mildly useful in games.  There were many different algorithms in the '70s and '80s over it. Marching Cubes, a moderately efficient voxel isosurface algorithm, was released and patented in 1987. The patent hurt the research field rather painfully until it expired in 2005.

 
:D And in today's news...
Holy shit, scanning a religious place, those guys might have got a new logo but WTF their marketing team is missing the basics.

As a side note there's a company near here doing more or less the same thing... except they will give you pretty nice meshes and tons of metadata on request.
Wut?
Who done what?

The "Unlimited Idiocy" people uploaded a new video, with the same condescending & misleading voice-over from their CEO, so it's time for the idiotic hype train to arrive again.

I'll C&P my response to that article from FB:
They've decided to target the GIS industry, where their tech actually makes sense, after over a decade of failure as a games middleware company. Not looking forward to all the red-herring cries of *HYPE* and "FAKE!" that flood the Internets whenever these snake-oil salesmen poke into the gaming world... Those are red herrings because yes their tech is legit, but no it's not actually that useful for most people. If it was, they wouldn't have failed to sell it to gamedevs for all these years.

If your art generation pipeline is based around laser scanning, your geometry is completely static, you're not already making use of your CPU for gameplay code or whatever, you don't care about using the GPU for rendering (maybe you moved your gameplay code there already), pre-baked lighting and shading is adequate, you have terabytes of storage available, and sub-30Hz "interactive" frame-rates are ok with you... then yeah, hype4dayz...




#5183267 Source control with git hub/lab?

Posted by Hodgman on 27 September 2014 - 04:12 AM

Really? A new branch for every change? I've never heard of anyone doing that.

 
That's pretty much the basic rule when using git. You could even go as far as saying "master is for merging, not for developing". Just make it your goal that master is always in a decent state, builds and isn't a messy construction site.
 
We're using Gerrit at work, which requires all changes that get pushed to be reviewed before they get merged to master (plus, a build is automatically started to verify the project still compiles... in the future, we might make passing unit tests a requirement as well).

We generally work on our own local master branch day to day, but origin/master (the master branch on the central repo) is kept in a working state.
 
When you're happy with the state of your own local master branch, you run a script that pushes it to a temporary remote branch and notifies the build server. That server then compiles your code and runs the game through a bunch of automatic tests on every platform. If all the compilation and testing passes, then the server pushes your changes into origin/master.
That's basically just a fancy system to ensure that no one pushes to origin/master unless their code actually compiles and has been tested first. You could do the same with pure discipline ;)

Also, if you want to be nice to your co-workers, you clean up your own local master before pushing. e.g. if you've done a series of small commits relating to the same feature, you might use git rebase --interactive to squish a few of them together.
 
We only really use branches if you're working on a long-running task which can't be committed in part because it would break things, and multiple people have to collaborate on finishing it.




#5183132 code for doing something while the game is off

Posted by Hodgman on 26 September 2014 - 08:32 AM

Or run a server. The "while closed" logic runs on your server. The game (client) retrieves data from your server when the user plays it.




#5182800 New game in development

Posted by Hodgman on 24 September 2014 - 10:09 PM

Recruitment threads must be posted in the classifieds section.




#5182766 What kind of performance to expect from real-time particle sorting?

Posted by Hodgman on 24 September 2014 - 06:50 PM

Nope, the point of using bitonic sort is that it can be completely parallel. E.g. in the Wikipedia network diagram, each horizontal line could be its own thread, performing (synchronized) comparisons at the vertical lines, resulting in a sorted buffer at the end:
[Image: Wikipedia's Batcher bitonic mergesort network diagram for eight inputs]

Here's an explanation of the algorithm from a GPU point of view, but it's old, so their example implementation is a pixel shader, not a compute shader-
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter46.html

I would assume that the nVidia and AMD sample collections would probably include a compute-shader implementation somewhere. If not, a quick google brings up some CUDA/OpenCL ones you could look at.

As for the bucket idea - you could try a counting sort / radix sort, which can also be parallelized.
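To give a feel for the structure, here's a minimal CPU sketch of the bitonic network (my illustration, not production code; a compute-shader version runs the inner loop as one thread per element, with a barrier between steps):

#include <algorithm>
#include <vector>

// Bitonic sort of a power-of-two-sized array. Every iteration of the inner
// i-loop is an independent compare/swap, which is what maps so well to GPU
// threads: one thread per element, synchronizing between each (k,j) step.
void BitonicSort( std::vector<float>& a )
{
  const size_t n = a.size(); // assumed to be a power of two
  for( size_t k = 2; k <= n; k *= 2 )        // bitonic sequence size
    for( size_t j = k / 2; j > 0; j /= 2 )   // compare distance
      for( size_t i = 0; i < n; ++i )        // parallelizable on a GPU
      {
        size_t partner = i ^ j;
        bool ascending = (i & k) == 0;
        if( partner > i &&
            ( ( ascending && a[i] > a[partner]) ||
              (!ascending && a[i] < a[partner]) ) )
          std::swap( a[i], a[partner] );
      }
}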


#5182653 Handling Uniform Locations?

Posted by Hodgman on 24 September 2014 - 08:36 AM

I have some old hardware that doesn't have the explicit uniform locations extension (even though it is still good hardware), and I would like to support it. It is a pain to use glGetUniformLocation; it causes so much redundant code to be written. Also, uniforms and such can get optimized out, thus returning an invalid value (iirc), which you then have to check for to avoid triggering an OpenGL error, which just piles onto the redundant code. I was wondering if anyone had any tips, or can share how they handle this elegantly?

Just pretend that you're using cbuffers/UBOs anyway, and emulate them using glUniform. Instead of creating actual OpenGL UBO instances, just malloc some memory to stand in for them.
 
Make a struct containing a uniform location, an offset (bytes into a cbuffer structure), and the type (e.g. vec4, vec2...).  
e.g. struct VariableDesc { int location, offset, type; };
 
For each shader, for each cbuffer, for each variable in the cbuffer, attempt to make one of these "VariableDesc" structures (skipping any variable that gets a location of -1, i.e. it was optimized out). You'll end up with an array of VariableDescs per cbuffer, per shader.
 
When you want to draw something with a particular shader (and a set of bound cbuffer/UBO instances), iterate through these arrays. For each VariableDesc item, read the data at the specified offset, call the gl function of the specified type, passing the specified location. e.g.
for( int i = 0; i != shader.numCBuffers; ++i )
  for( int j = 0; j != shader.cbuffers[i].numVariables; ++j )
  {
    const VariableDesc& v = shader.cbuffers[i].variableDescs[j];
    const void* data = (const char*)cbuffer[i] + v.offset;
    switch( v.type )
    {
      case TypeVec4f: glUniform4fv( v.location, 1, (const float*)data ); break;
      // ... cases for the other types (vec2, mat4, etc.)
    }
  }
Now you can keep your engine simple, pretending that you're using UBOs everywhere, while emulating support for them on crappy old GL2-era GPUs without hard-coding any glUniform calls.



