
#5215254 Getting out of the industry?

Posted by on 08 March 2015 - 04:56 AM

Send resumes to Google, etc... Plenty of large corporations actually have good cultures too, plus they'll probably double your salary compared to a games studio.

What would happen though if you simply say 'no' to the "voluntary" overtime?

In my humble arrogant opinion, endemic crunch occurs because not enough staff are willing to say no to abusive working conditions, unless their colleagues are already doing so. Without a union movement to present a unified stand, or other role models to lead the way, it's hard to be *the guy* who takes a stand.
If you're willing to quit anyway, and even willing to leave the industry, then you don't have much to lose by being the guy who stands up for his right to an 8/8/8 day.

FWIW, plenty of people agree that being forced to work 40+ hours per week is abuse. Where I'm from, overtime has to be voluntary, it has to be paid at double rates, and you have to be given an equal amount of time off in the future to recover. Failure to follow these guidelines results in a $30k fine per instance, for running an abusive workplace.

Needing to crunch in the first place is a management failure, and there's endless evidence that long-term overtime is actually counter-productive (another management failure!), so you should feel no guilt in refusing to be punished for their failures.

If you get a job offer from another company, you can always take it to your current management for a counter-offer. Tell them you'll stay if the overtime ends. If they care about you, you can keep your games job. If they admit that they don't give a shit about you, accept the new offer and don't look back!

#5215242 Vulkan is Next-Gen OpenGL

Posted by on 08 March 2015 - 12:44 AM

AFAIK GL on Apple is similar to D3D on Windows -- there's a middle layer between the application and the driver that does most of the work, with the drivers then implementing a much simpler back-end API (on Windows, you might call that the WDDM).

On other platforms, the driver implements the complete GL API itself, and there's no standard/OS middle layer between the driver and the application.

#5215127 What are your opinions on DX12/Vulkan/Mantle?

Posted by on 07 March 2015 - 07:39 AM

That sounds extremely interesting. Could you give a concrete example of what the descriptions in a DrawItem look like? What is the granularity of a DrawItem? Is it a per-Mesh kind of thing, or more like a "one draw item for every material type" kind of thing, and then you draw every mesh that uses that material with a single DrawItem?

My DrawItem corresponds to one glDraw* / Draw* call, plus all the state that needs to be set immediately prior the draw.
One model will usually have one DrawItem per sub-mesh (where a sub-mesh is a portion of that model that uses a particular material), per pass (where a pass is e.g. drawing to the gbuffer, drawing to a shadow-map, forward rendering, etc). When drawing a model, it will find all the DrawItems for the current pass and push them into a render list, which can then be sorted.

A DrawItem which contains the full pipeline state, the resource bindings, and the draw-call parameters could look like this in a naive D3D11 implementation:

#include <d3d11.h>
#include <vector>
#include <tuple>
#include <utility>
using std::vector; using std::pair; using std::tuple;
typedef unsigned int uint;

struct DrawItem
{
  //pipeline state:
  ID3D11PixelShader* ps;
  ID3D11VertexShader* vs;
  ID3D11BlendState* blend;
  ID3D11DepthStencilState* depth;
  ID3D11RasterizerState* raster;
  D3D11_RECT* scissor;
  //input assembler state:
  ID3D11InputLayout* inputLayout;
  ID3D11Buffer* indexBuffer;
  vector<tuple<int/*slot*/, ID3D11Buffer*, uint/*stride*/, uint/*offset*/>> vertexBuffers;
  //resource bindings:
  vector<pair<int/*slot*/, ID3D11Buffer*>> cbuffers;
  vector<pair<int/*slot*/, ID3D11SamplerState*>> samplers;
  vector<pair<int/*slot*/, ID3D11ShaderResourceView*>> textures;
  //draw call parameters:
  int numVerts, numInstances, indexBufferOffset, vertexBufferOffset;
};

That structure is extremely unoptimized though. It's a base size of ~116 bytes, plus the memory used by the vectors, which could be ~1KiB!

I'd aim to compress them down to 28-100 bytes in a single contiguous allocation, e.g. by using IDs instead of pointers, by grouping objects together (e.g. referencing a PS+VS program pair instead of referencing each individually), and by using variable-length arrays built into that structure instead of vectors.
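
To make that concrete, a compressed item might look something like this (just a sketch -- the ID widths, field grouping, and names here are illustrative, not my actual engine layout):

#include <cstdint>

//Hypothetical compressed layout: 16-bit IDs index into engine-owned
//tables of state objects, and the variable-length binding data lives
//inline, immediately after this fixed-size 28-byte header.
struct CompressedDrawItem
{
  //pipeline state, grouped -- one ID for a VS+PS program pair,
  //one ID for a blend/depth/raster state group:
  uint16_t programPairId;
  uint16_t renderStateId;
  uint16_t inputLayoutId;
  uint16_t indexBufferId;
  //counts for the variable-length arrays appended after the struct:
  uint8_t numVertexBuffers, numCbuffers, numSamplers, numTextures;
  //draw call parameters:
  uint32_t numVerts, numInstances, indexBufferOffset, vertexBufferOffset;
  //variable-length arrays of 16-bit resource IDs follow, all in the
  //same single contiguous allocation:
  uint16_t bindings[]; //flexible array member (a compiler extension in C++)
};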

When porting to Mantle/Vulkan/D3D12, that "pipeline state" section all gets replaced with a single "pipeline state object" and the "input assembler" / "resource bindings" sections get replaced by a "descriptor set". Alternatively, these new APIs also allow for a DrawItem to be completely replaced by a very small native command buffer!
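
As a rough sketch of what that looks like (using the D3D12 API as it eventually shipped; the exact field choices here are illustrative):

#include <d3d12.h>
#include <cstdint>

//Hypothetical D3D12-flavoured DrawItem: the whole "pipeline state"
//section collapses into one PSO, and the resource bindings become
//descriptor table handles into a pre-populated descriptor heap.
struct DrawItemD3D12
{
  ID3D12PipelineState* pso; //replaces ps/vs/blend/depth/raster/inputLayout
  D3D12_GPU_DESCRIPTOR_HANDLE srvTable;     //replaces per-slot texture bindings
  D3D12_GPU_DESCRIPTOR_HANDLE cbufferTable; //replaces per-slot cbuffer bindings
  D3D12_INDEX_BUFFER_VIEW indexBuffer;
  D3D12_VERTEX_BUFFER_VIEW vertexBuffer;
  uint32_t numVerts, numInstances, indexBufferOffset, vertexBufferOffset;
};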


There's a million ways to structure a renderer, but this is the design I ended up with, which I personally find very simple to implement on / port to every platform.

#5215105 What are your opinions on DX12/Vulkan/Mantle?

Posted by on 07 March 2015 - 02:53 AM

Apparently the Mantle spec documents will be made public very soon, which will serve as a draft/preview of the Vulkan docs that will come later.

I'm extremely happy with what we've heard about Vulkan so far. Supporting it in my engine is going to be extremely easy.

However, supporting it in other engines may be a royal pain.
e.g. if you've got an engine that's based around the D3D9 API, then your D3D11 port is going to be very complex.
However, if your engine is based around the D3D11 API, then your D3D9 port is going to be very simple.

Likewise for this new generation of APIs -- if you're focusing too heavily on current generation thinking, then forward-porting will be painful.

In general, implementing new philosophies using old APIs is easy, but implementing old philosophies on new APIs is hard.


In my engine, I'm already largely using the Vulkan/D3D12 philosophy, so porting to them will be easy.
I also support D3D9-11 / GL2-4, and the code to implement these "new" ideas on those "old" APIs is actually fairly simple, so I'd be brave enough to say that it's possible to have a very efficient engine design that works equally well on every API -- the key is to base it around these modern philosophies, though!
Personally, my engine's cross-platform rendering layer is based on a mixture of Mantle and D3D11 ideas.

I've made my API stateless: every "DrawItem" must contain a complete pipeline state (blend/depth/raster/shader programs/etc) and all resource bindings required by those programs -- however, the way these states/bindings are described (in client/user code) is very similar to the D3D11 model.
DrawItems can/should be prepared ahead of time and reused, though you can create them every frame if you want... When creating a DrawItem, you need to specify which "RenderPass" it will be used for, which specifies the render-target format(s), etc.

On older APIs, this lets you create your own compact data structures containing all the data required to make the D3D/GL API calls for that draw-call.
On newer APIs, this lets you actually pre-compile the native GPU commands!
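
For illustration, replaying the naive D3D11 DrawItem from the post above could look something like this (a minimal sketch, not my engine's actual code):

#include <d3d11.h>
#include <tuple>

//Hypothetical: replay one pre-built DrawItem onto a D3D11 context.
//Every state is set from the item, so nothing leaks between draws.
//(Uses the naive DrawItem struct from the post above; a real version
//would also bind cbuffers/samplers/textures per shader stage, etc.)
void Submit(ID3D11DeviceContext* ctx, const DrawItem& item)
{
  ctx->VSSetShader(item.vs, nullptr, 0);
  ctx->PSSetShader(item.ps, nullptr, 0);
  ctx->OMSetBlendState(item.blend, nullptr, 0xFFFFFFFF);
  ctx->OMSetDepthStencilState(item.depth, 0);
  ctx->RSSetState(item.raster);
  if (item.scissor)
    ctx->RSSetScissorRects(1, item.scissor);
  ctx->IASetInputLayout(item.inputLayout);
  ctx->IASetIndexBuffer(item.indexBuffer, DXGI_FORMAT_R32_UINT, 0);
  for (auto& vb : item.vertexBuffers)
  {
    UINT slot = std::get<0>(vb), stride = std::get<2>(vb), offset = std::get<3>(vb);
    ID3D11Buffer* buf = std::get<1>(vb);
    ctx->IASetVertexBuffers(slot, 1, &buf, &stride, &offset);
  }
  for (auto& cb : item.cbuffers)  ctx->VSSetConstantBuffers(cb.first, 1, &cb.second);
  for (auto& s : item.samplers)   ctx->PSSetSamplers(s.first, 1, &s.second);
  for (auto& t : item.textures)   ctx->PSSetShaderResources(t.first, 1, &t.second);
  ctx->DrawIndexedInstanced(item.numVerts, item.numInstances,
                            item.indexBufferOffset, item.vertexBufferOffset, 0);
}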


You'll notice that in the Vulkan slides released so far, when you create a command buffer, you're forced to specify which queue you promise to use when submitting it later. Different queues may exist on different GPUs -- e.g. if you've got an NVidia and an Intel GPU present. The requirement to specify a queue ahead of time means that you're actually specifying a particular GPU ahead of time, which means the Vulkan drivers can convert your commands to that GPU's actual native instruction set ahead of time!
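
(In Vulkan as it eventually shipped, this surfaces as command pools being tied to one queue family at creation time -- a minimal sketch, where the device and queue-family index are assumed to come from the usual device-creation code:)

#include <vulkan/vulkan.h>

//Sketch: every command buffer allocated from this pool is bound to the
//given queue family up front, so its commands can be translated for
//that queue's GPU ahead of time.
VkCommandPool MakePoolForQueueFamily(VkDevice device, uint32_t queueFamilyIndex)
{
  VkCommandPoolCreateInfo info = {};
  info.sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
  info.queueFamilyIndex = queueFamilyIndex;
  VkCommandPool pool = VK_NULL_HANDLE;
  vkCreateCommandPool(device, &info, nullptr, &pool);
  return pool;
}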

In either case, submitting a pre-prepared DrawItem to a context/command-buffer is very simple/efficient.
As a bonus, you sidestep all the bugs involved in state-machine graphics APIs :D

#5214873 Very strange FPS fluctuation

Posted by on 05 March 2015 - 08:19 PM

Even with microseconds, your clock may lose a second per 4 hours. I wouldn't be satisfied with a watch that did that :lol:
It might not seem like much, but it may be enough to lead to bugs in long play sessions.

IMHO, absolute time values should either be:
* in a 64bit float (i.e. a double), in seconds, which provides the convenience of making all your blah-per-second math easy, and has the necessary precision to remain accurate even if the user leaves the game running for months.
* in a 64bit integer, in the CPU's native timer frequency (whatever QueryPerformanceCounter/etc is in), which is likely a fraction of a nanosecond. This is simpler in a lot of ways, but requires dividing by the CPU timer's frequency to convert from arbitrary ticks into time before using it for any calculations.

Delta time variables can almost always be 32bit: either the difference of two absolute-time doubles with the result truncated to float, or the difference between two int64s.
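
For example, a minimal sketch of the int64 approach on Windows (the function names are illustrative):

#include <windows.h>
#include <cstdint>

//Absolute times stay as raw 64-bit tick counts; only the small deltas
//get converted down to a 32-bit float in seconds.
static int64_t TicksNow()
{
  LARGE_INTEGER t;
  QueryPerformanceCounter(&t);
  return t.QuadPart;
}

static float TickDeltaToSeconds(int64_t deltaTicks)
{
  LARGE_INTEGER freq;
  QueryPerformanceFrequency(&freq); //ticks per second, constant at runtime
  return (float)((double)deltaTicks / (double)freq.QuadPart);
}

void GameLoopExample()
{
  int64_t prev = TicksNow();
  //...one frame of work...
  int64_t now = TicksNow();
  float dt = TickDeltaToSeconds(now - prev); //32bit delta is plenty
}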

#5214713 Vulkan is Next-Gen OpenGL

Posted by on 05 March 2015 - 06:30 AM

NVidia has solid GL drivers... but AMD/Intel could probably save themselves a lot of time and money if they could just completely scrap their own GL drivers and just make Vulkan drivers instead.

That definitely was the case once, but I don't think I've had real trouble with an AMD or Intel driver in the past 5 years...
Performance-wise, NV still has a huge edge.
I don't imagine NV supporting an open source GL implementation, as it would mean giving up this advantage.

#5214664 Vulkan is Next-Gen OpenGL

Posted by on 05 March 2015 - 01:02 AM

It's much harder to debug a problem that locks up your entire system every time you try to analyze it.
Hopefully their validation layer is good enough to solve this issue.

I expect that when running in validation mode, every command in every command buffer will be sanity checked, so that it's impossible to crash the GPU -- it will just refuse to submit the buffer to the queue rather than crash. This would also include checking that every page of every pointer-range that you've supplied is actually mapped.


Now graphics corruption due to bad synchronisation... that's a different kettle of fish! :D

That might be fun as a pet project but otherwise I don't see the point in subjecting yourself to the tortures that OpenGL driver writers had to endure for so long (and still will unless they got promoted).
The OpenGL API is significantly flawed, which is specifically why these kinds of major upgrades have been requested for so Long('s Peak).

NVidia has solid GL drivers... but AMD/Intel could probably save themselves a lot of time and money if they could just completely scrap their own GL drivers and just make Vulkan drivers instead. A reliable, open-source GL->Vulkan layer would be very handy for them :)

#5214652 Unity - Normals

Posted by on 04 March 2015 - 11:17 PM

For reference, you can also have a lot of trouble if your art tools use different tangents/binormals than Unity. For best results, you should export your tangents/binormals from your art program and tell Unity to import them rather than regenerating them.

#5214630 Vulkan is Next-Gen OpenGL

Posted by on 04 March 2015 - 09:39 PM

Explicit multi-device capabilities should be a standard part of all these next-gen APIs, allowing devs to implement SLI/Crossfire-style alternate frame rendering, split-frame rendering, or other kinds of workload splits, such as moving shadows or post-processing to another GPU, with the developer in control of synchronization and cross-GPU data transfers.

It also opens up the ability for one device to be used for graphics and another purely for compute, with different latencies on each device.


If Vulkan doesn't support this, I'll be quite surprised.

#5214370 What physical phenomenon does ambient occlusion approximate?

Posted by on 03 March 2015 - 11:27 PM

Ambient Occlusion is basically a shadow map for a sky/dome light. Using it for any other purpose is a clever hack, not an application of a physical concept.

You shouldn't really be using AO to shadow sunlight, unless your artists deliberately want to make that kind of non-physical choice for style reasons.

#5214333 Any tutorials on ray marching with shaders?

Posted by on 03 March 2015 - 06:32 PM

http://www.shadertoy.com/ has lots of example code, if that helps.

#5214217 Convenient approach to composite pattern in C++

Posted by on 03 March 2015 - 09:06 AM

You could write a horrible macro to condense your loop to:
$(listeners, InformOfSomething(1, 2, 3, 4));

:D (obviously call it something other than $ :lol:)
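
For illustration, a minimal sketch of such a macro (FOR_ALL and the Listener type are made up for the example):

#include <vector>

struct Listener { void InformOfSomething(int, int, int, int) {} };

//Hypothetical 'horrible macro': broadcasts one member-function call
//to every element in a container of pointers.
#define FOR_ALL(container, call) \
  for (auto* element : (container)) { element->call; }

void Broadcast(std::vector<Listener*>& listeners)
{
  FOR_ALL(listeners, InformOfSomething(1, 2, 3, 4));
}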

#5214204 Vulkan is Next-Gen OpenGL

Posted by on 03 March 2015 - 07:58 AM

Microsoft is not listed as a supporter (they haven't supported OGL for years now, right?), and if Vulkan holds its promises and Mantle is the preferred console API, why would anyone want to use D3D12? To support Win10 games?

Mantle is not a console API; at the moment it only runs on Windows, with plans for Linux/Mac support.

Xbox360 uses D3D9x.

XboxOne uses D3D11x, but soon to be D3D12x.

Windows uses D3D9 for WinXP+, D3D11 for WinVista+ or D3D12 for Win10+... or OpenGL.

PS3 uses GCM.

PS4 uses GNM.

Linux/MacOS use OpenGL.


If you're a console dev writing a cross-generation console game, you'll by necessity be using D3D9x, D3D11x (soon D3D12x), GCM and GNM.

If you're a console dev writing a current-generation console game, you'll by necessity be using D3D11x (soon D3D12x) and GNM.


If porting to Windows, it will be easiest to port the D3D11x version to D3D11 (or soon, easiest will be to port the D3D12x version to D3D12).

Porting to GL2/GL3/GL4 is currently a nightmare, no reason to bother with that pain unless you really need Mac/Linux support.

Porting to Mantle is not too bad... it's similar to a mixture of D3D11 and GNM, kinda, sorta.

Porting to Vulkan/GLNext will be similar to porting to Mantle... but still much harder than porting your D3D12x Xbone version to Windows D3D12.

So for games where consoles are the lead SKU, I expect D3D12 will be very popular on the Windows ports simply due to development/maintenance costs.


Also, we'll have to see how well Vulkan's validation layer works out. Ideally, this will fix a lot of practical issues that have hurt GL adoption professionally in the past.

D3D has always had the advantage of MS being the sole author of the runtime layer, which acts as a validation bridge between the user code and the driver code, and almost completely protects the user from vendor-specific behaviors (also thanks to MS actually testing/certifying the vendors' driver implementations!)

With GL on the other hand, the user code communicates directly with the vendor's driver code, with no independent middle layer to offer any protection or implementation behavior guarantees.

Hopefully Vulkan solves this issue, but it's yet to be seen. Hopefully its validation layer lets you easily identify non-compliant code and ensure you're doing everything according to how the spec says you should do it... and hopefully the "open source tests" are as good as MS's D3D driver tests, to keep the vendors honest and reliable.

#5214185 Dynamic Memory and throwing Exceptions

Posted by on 03 March 2015 - 06:31 AM

Yeah, I only have a single __try in the whole engine, inside the crash-dump code.

Also, one of the reasons that you should try to make invalid code crash is so you don't need crash dumps :D

If the code crashes very reliably and predictably, a programmer can easily reproduce a bug while their own debugger is attached by following the steps provided by the bug reporter.
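
For illustration, a sketch of the kind of hard assert that produces those reliable, reproducible crashes (HARD_ASSERT is a made-up name):

#include <cstdio>
#include <cstdlib>

//Hypothetical hard assert: fails loudly and deterministically in every
//build, so the same bad input always crashes at the same place.
#define HARD_ASSERT(cond)                                   \
  do {                                                      \
    if (!(cond)) {                                          \
      std::fprintf(stderr, "Assert failed: %s (%s:%d)\n",   \
                   #cond, __FILE__, __LINE__);              \
      std::abort(); /*break into any attached debugger*/    \
    }                                                       \
  } while (0)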

#5214169 Vulkan is Next-Gen OpenGL

Posted by on 03 March 2015 - 05:02 AM


Everything in there is good. Praise Gaben!

This is exactly what we were asking for, Khronos!


BTW, this paves the way for a true HLSL->SPIR-V compiler, eventually allowing people to use their existing source files :D