
Hodgman

Member Since 14 Feb 2007

#5161022 A Dream at Work! Starting up a Studio.

Posted by Hodgman on 17 June 2014 - 05:36 AM

The proposal made is for an AAA Open World game for console platforms: PS4 and Xbox One to be specific. 

'AAA' isn't a very well-defined term; in my experience it usually means "over $10M budget", or just comparable quality to the other big-budget games ;-)

Having a brand new studio that's never worked together before, trying to produce a project like that, is a pretty big risk.

- What is the average number of personnel a small development studio needs [from software engineers and programmers to animators, testers and so forth] to develop an AAA cross-platform game in a time span of two years?  What is the least number of people, and which specialties, needed?

A small console game team would be about a dozen programmers and a dozen artists -- though this isn't quite up to the "AAA" standard... more like a budget developer producing a rougher game that might still rate well or become a cult hit...
 
For bigger games, just check out the credits sections on games that you're comparing your own project to.

-Would the option of building your own game engine [weighing in the time factor in doing so] be better than using existing game engines out there?

If you have some senior/experienced console engine programmers on the team, and you're prepared to stay in pre-production for a year before ramping up into full production mode, then maybe ;-)

Otherwise, CryEngine/Unreal are very affordable these days, plus many workers already have experience with them...

-Are there top notch Game Engines out there that can create an Open World, sophisticated, modern videogame for PS4 and Xbox One?

Yes, assuming you've also got a highly experienced team of console game programmers to make proper use of it!

-I am aware that the cost can go up to tens of millions, so I'll risk being candid: my projected budget is between 3 and 5 million US dollars.  Have you heard of studios that have done it with such a limited budget?

I've worked on a bunch of $3-5M console games, some involving big IPs, but I wouldn't call them 'AAA'...

If you've got the budget up front, that's obviously a constraint on the planning. You may have to compromise a lot on the game/world in order to stay in budget.
Also, developing the game is only half the cost -- marketing/selling it can cost just as much as making it! Does your budget cover publishing costs, or do you plan on partnering with a publisher? 

-Is there an existing business plan out there I could use as an outline?

It's going to be extremely game-specific... Optimistically, you'll want a fairly complete design first, and then a collaboration between the designer, a lead programmer/technical director, a lead artist and a decent producer/project manager.


#5160788 [DX11] Why we need sRGB back buffer

Posted by Hodgman on 16 June 2014 - 02:23 AM

The misunderstanding actually is with the sRGB backbuffer. I thought that an sRGB backbuffer is like a JPEG in the sRGB color space, meaning that all values in the sRGB backbuffer are already gamma corrected (pow(value, 1/2.2)). If so, then the final color values should be output with pow(value, 1/2.2) correction

That's correct... but no further correction needs to be applied when the buffer is output, because it was already applied when the values were stored.
 
The display expects to receive data that has been encoded into the sRGB format (approximately pow(value, 1/2.2)).
No matter what kind of back-buffer you use, the bytes in that buffer are sent to the display as-is.
JPEGs are stored with sRGB correction already applied, so JPEG data is sent to the display without any further modifications applied to the data.
 
If you're using an sRGB back-buffer, then when you draw into this back-buffer, the GPU automatically does the linear->sRGB conversion (pow(1/2.2)).
If you're not using an sRGB buffer, then you have to manually perform the pow(1/2.2) yourself in the shader code.
 
So:

  • pixel shader outputs linear data to sRGB buffer -> data is converted using the linear-to-srgb curve -> sRGB(data) sent to the display -> user sees correct/linear result
  • pixel shader outputs linear data to linear buffer -> data is not converted -> linear data sent to the display -> user sees incorrect result (~gamma 2.2 curve)
  • pixel shader outputs gamma-corrected data to linear buffer -> data is not converted -> gamma-corrected data sent to the display -> user sees correct/linear result
  • pixel shader outputs gamma-corrected data to sRGB buffer -> data is converted using the linear-to-srgb curve -> sRGB(sRGB(data)) sent to the display -> user sees incorrect result (~gamma 0.45 curve).

Either you output linear data to an sRGB buffer, which automatically applies sRGB gamma correction, and sends an sRGB signal to the display.
Or, you manually perform the sRGB correction yourself and write to a regular buffer, which still results in an sRGB signal being sent to the display.
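
To make the two paths concrete, here's a rough sketch of the conversions in plain C++ (using the common gamma-2.2 power-curve approximation; the real sRGB transfer function also has a small linear toe near black, and the function names here are just for illustration):

#include <algorithm>
#include <cmath>

// Approximate sRGB encode/decode using the gamma-2.2 power curve.
float linearToSrgb(float linear)   // roughly what the GPU does when writing to an sRGB render target
{
    return std::pow(std::min(std::max(linear, 0.0f), 1.0f), 1.0f / 2.2f);
}

float srgbToLinear(float srgb)     // roughly what the GPU does when reading from an sRGB texture
{
    return std::pow(std::min(std::max(srgb, 0.0f), 1.0f), 2.2f);
}

With a non-sRGB back-buffer, you do the equivalent of linearToSrgb at the end of your pixel shader yourself; with an sRGB back-buffer, the hardware does it for you on write (and the inverse on reads from sRGB textures).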
 

So, my question is: why do we need sRGB backbuffers plus a modified final-output pixel shader, if we can simply use a non-sRGB texture?

You shouldn't do either of those things.
You should either use sRGB with no modification during output, or you should use non-sRGB with modification on output.
Your first, third and fourth images are all incorrect. The second image is correct.
 
The reason you think the first image is 'correct' is because "mathematically linear" is not the same as "perceptually linear". In order to perform correct lighting and shading calculations, or to be able to reproduce the same photograph that we captured earlier, we need all the data to be mathematically linear.
 
If you're painting pretty gradients, you don't care about maths -- you just want it to look good, so you care about perception, not mathematical correctness :P So your test of painting a black/white gradient is not a very good test for this purpose. It turns out that "gamma 2.2" is also a pretty good approximation of the human perception of brightness, so the non-linear "gamma 2.2" gradient is perceived as being fairly even, even though it's mathematically curved.

 

In modern games, this perceptual part is taken care of by the tone-mapping algorithm. All of the lighting algorithms still require mathematical linearity though.
 
I don't know if it's because of JPEG artifacts or problems with your program, but none of your images are quite correct. When measured at equal points along the horizontal, though, I got these intensity measurements. As you can see, the 2nd image is the closest match to a linear gradient B)

 0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 100% -- Ideal linear gradient
 0%,  2%,  6%, 12%, 20%, 30%, 40%, 53%, 67%, 83%,  99% -- image #1
 0%, 16%, 28%, 39%, 48%, 58%, 67%, 76%, 84%, 93%,  99% -- image #2
11%, 44%, 56%, 65%, 72%, 78%, 83%, 88%, 92%, 96%, 100% -- image #3
 0%,  1%,  5%, 11%, 20%, 30%, 42%, 55%, 69%, 84%,  99% -- image #4



#5160726 Just a couple of Data-Oriented Design questions.

Posted by Hodgman on 15 June 2014 - 05:55 PM

assuming that components are always added to the end, and that deletion involves an actual erase-remove operation rather than just marking certain array elements as 'dead' and recycling them later

I wouldn't make that assumption. I'd probably default to using a pool to store the objects -- a fixed-size array of elements, where the unused ones are in a free list. Erasing an object involves destructing it and then adding its pointer/index to the free list. Allocating an object involves popping the front of the free list and constructing the object there.
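
A rough sketch of what I mean (simplified -- a real pool would use raw/aligned storage with placement-new and explicit destructor calls, rather than requiring default-constructible, assignable objects):

#include <cassert>
#include <cstdint>
#include <vector>

// Fixed-capacity pool: unused slots form an intrusive free list of indices.
template <typename T>
class Pool {
public:
    explicit Pool(uint32_t capacity)
        : slots(capacity), nextFree(capacity)
    {
        for (uint32_t i = 0; i < capacity; ++i)
            nextFree[i] = i + 1;            // value == capacity marks "end of list"
        freeHead = 0;
    }

    uint32_t allocate(const T& value)       // returns a stable index/handle
    {
        assert(freeHead != slots.size() && "pool exhausted");
        uint32_t index = freeHead;
        freeHead = nextFree[index];         // pop the front of the free list
        slots[index] = value;               // "construct" the object in place
        return index;
    }

    void release(uint32_t index)
    {
        slots[index] = T{};                 // "destruct" / reset the object
        nextFree[index] = freeHead;         // push the slot back onto the free list
        freeHead = index;
    }

    T&       operator[](uint32_t index)       { return slots[index]; }
    const T& operator[](uint32_t index) const { return slots[index]; }

private:
    std::vector<T>        slots;
    std::vector<uint32_t> nextFree;
    uint32_t              freeHead = 0;
};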

Not every system that works on transforms (say) needs to work on every transform. There may be certain systems that only need to perform operations on Asteroid transforms. This means that each component will have to identify the type of the entity to which it belongs, so that the system in question can check the type before operating on the component.

I see that as a huge compromise / "code smell", like a dynamic_cast in an OOP system... :(
You can avoid that pollution by reorganizing your algorithm to first select only the components that it's interested in:
1) For each asteroid as A, add A.transform to list of transform handles.
2) For each transform in this list, do work...
 
Or you can have more than one transform pool.

But suppose we're doing a data-oriented entity/system/component architecture, and suppose we have a system that performs some operation on transforms, but we only want Asteroids and BlackHoles to use it.

As above, don't do this by iterating through all the transforms... Iterate through the asteroids or black holes! Either:
* for each asteroid, fetch asteroid->transform, do stuff
or, as above:
1. for each asteroid, add asteroid->transform to a list of handles
2. for each item in the list, fetch it, do stuff
(a rough sketch of the second approach is below)
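
A rough sketch of that second approach, with made-up component types and handle-based pools:

#include <cstdint>
#include <vector>

struct Transform { float position[3]; /* rotation, scale, etc. */ };
struct Asteroid  { uint32_t transformHandle; /* other asteroid data */ };

// Step 1: walk the asteroids and collect handles to just the transforms we care about.
// Step 2: walk that list and do the actual transform work, with no per-component type checks.
void updateAsteroidTransforms(const std::vector<Asteroid>& asteroids,
                              std::vector<Transform>& transformPool)
{
    std::vector<uint32_t> handles;
    handles.reserve(asteroids.size());
    for (const Asteroid& a : asteroids)          // step 1: select
        handles.push_back(a.transformHandle);

    for (uint32_t h : handles)                   // step 2: do work
    {
        Transform& t = transformPool[h];
        t.position[1] -= 0.1f;                   // placeholder "work" on the transform
    }
}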

 

My intuition (and maybe this is a good intuition, or maybe it's brought on by an overdose of OOP-thinking) is that main, central pieces of glue code should not know about concrete types; the only code that should care about the behavior of concrete types are the concrete types themselves, and the central "glue" code that brings all of these types together should know only about the abstract classes, and be completely decoupled from the concrete types themselves.

I would say the opposite (in all of OOP/CES/DOD). All of the components should be fairly isolated and self-contained, knowing as little about the outside world as possible.
 
The high level game code (which glues all the components together) is the part that deals with many, many different concrete types, plugging them together to form useful gameplay systems.
 

This find operation is going to necessarily entail jumping hither and yon through the component arrays, modifying them as called-for by the collision response.

After generating the list of collision events, you can sort the list by collider index, so that when processing the collisions you're iterating through the colliders in index order.
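
e.g. something like this (made-up event struct):

#include <algorithm>
#include <cstdint>
#include <vector>

struct CollisionEvent {
    uint32_t colliderIndex;   // index of the collider/component this event refers to
    uint32_t otherIndex;
    // contact point, normal, etc.
};

// Sorting by collider index means the response pass walks the component
// arrays mostly in order, instead of jumping around randomly in memory.
void sortCollisionEvents(std::vector<CollisionEvent>& events)
{
    std::sort(events.begin(), events.end(),
              [](const CollisionEvent& a, const CollisionEvent& b)
              { return a.colliderIndex < b.colliderIndex; });
}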
 

In traditional OOP, you often just have an array of EntityBase* and you just call a polymorphic Update method on each entity.

Just for the record, I would call this "Bad OOP", not "traditional" :P Although, traditionally, a lot of OOP code is bad :D

Game companies still use OOP day-in/day-out, but a lot better than they used to (with things like DoD and component-composition in mind, rather than the inheritance-will-solve-all-our-problems methodology). Also, a lot of the proper ways to use OOP were known in the 90's, but they weren't well taught, so they weren't popularly used... The whole modern "component system" idea is actually a core OOP idea that's been ignored for a long time in many circles.




#5160402 using static initialisation for paralelization?

Posted by Hodgman on 13 June 2014 - 06:52 PM

On GCC 4.9 with full optimizations, this produces: 

foo():
	cmp	BYTE PTR guard variable for foo()::bar[rip], 0
	je	.L2
	mov	eax, DWORD PTR foo()::bar[rip]
	ret
.L2:
	sub	rsp, 24
	mov	edi, OFFSET FLAT:guard variable for foo()::bar
	call	__cxa_guard_acquire
	test	eax, eax
	jne	.L4
	mov	eax, DWORD PTR foo()::bar[rip]
	add	rsp, 24
	ret
.L4:
	call	rand
	mov	edi, OFFSET FLAT:guard variable for foo()::bar
	mov	DWORD PTR [rsp+12], eax
	mov	DWORD PTR foo()::bar[rip], eax
	call	__cxa_guard_release
	mov	eax, DWORD PTR [rsp+12]
	add	rsp, 24
	ret
It won't take a lock every single time, but it does check a global boolean. The gist is something like: 
if not initialized
  lock
  if not initialized
    set initial value
    initialized = true
  end if
  unlock
end if

I don't mean to second guess the GCC authors here, but isn't that the "double checked locking" anti-pattern?

What if the CPU reorders the first two reads, as it is allowed to do...? [edit] my mistake - x86 isn't allowed to reorder reads with respect to each other [/edit]

second:
	cmp	BYTE PTR guard variable for foo()::bar[rip], 0
	je	.L2
First:
	mov	eax, DWORD PTR
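
For reference, here's roughly what the same pattern looks like when written by hand with C++11 atomics -- the acquire load on the fast path, paired with the release store after initialization, is what makes the "double check" legal in general (and is effectively what the compiler-generated guard check plus __cxa_guard_acquire/release provide). Of course, in practice you'd just use the function-local static and let the compiler emit this for you:

#include <atomic>
#include <cstdlib>
#include <mutex>

int lazyRand()
{
    static std::atomic<bool> initialized{false};   // constant-initialized, no guard needed
    static std::mutex        initMutex;
    static int               value;                // zero-initialized before main()

    if (!initialized.load(std::memory_order_acquire))        // fast path: one atomic read
    {
        std::lock_guard<std::mutex> lock(initMutex);
        if (!initialized.load(std::memory_order_relaxed))    // re-check under the lock
        {
            value = std::rand();                             // the one-time initialization
            initialized.store(true, std::memory_order_release);
        }
    }
    return value;
}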



#5160298 using static initialisation for paralelization?

Posted by Hodgman on 13 June 2014 - 08:07 AM

No, function-scope statics are initialized only when the function is executed. Imagine there's a bool that's initialized before main, which is used in an "if !initialized" check every time the function is called.
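
A tiny self-contained example of that ordering (made-up names):

#include <cstdio>

static int expensiveInit()
{
    std::puts("initializing...");       // runs once, on the first call to foo()
    return 42;
}

int foo()
{
    static int bar = expensiveInit();   // not run before main(); run lazily, guarded by a hidden flag
    return bar;
}

int main()
{
    std::puts("main started");          // printed before "initializing..."
    foo();                              // first call: the static gets initialized here
    foo();                              // later calls: only the guard flag is checked
}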


#5159940 Bell's theorem: simulating spooky action at distance of Quantum Mechanics

Posted by Hodgman on 11 June 2014 - 10:08 PM

Ignoring how you've come to set up your code above, the "game" that the code represents is:

Pick a number from 0 to 99. Is the number 82 or more?

Repeat.

How often does the answer to that question match the previous answer?

 

There's 100 numbers, 18 result in yes, 82 result in no.

Two 'no's in a row is 0.82², or 67.24%.

Two 'yes's in a row is 0.18², or 3.24%.

 

A 'no' and a 'yes' is 0.18*0.82 + 0.82*0.18, or 29.52%

Two 'yes's or two 'no's is 67.24% + 3.24%, which is 70.48%

 

So given 100 samples, a result of 70 or 71 matches and 29 or 30 mismatches is completely expected... The code produces the numbers that you've told it to produce... Which doesn't prove anything...
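
For example, a quick brute-force version of that game in plain C++ lands on the same ~70% match rate, with no physics involved at all (rand()%100 is slightly biased, but not enough to matter here):

#include <cstdio>
#include <cstdlib>

int main()
{
    const int samples = 100000;
    int matches = 0;
    bool previous = (std::rand() % 100) >= 82;   // 18 of the 100 possible numbers give 'yes'
    for (int i = 0; i < samples; ++i)
    {
        bool current = (std::rand() % 100) >= 82;
        if (current == previous)                 // same answer as last time?
            ++matches;
        previous = current;
    }
    std::printf("match rate: %.2f%%\n", 100.0 * matches / samples);   // ~70.48%
}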

So far, this seems the same as saying: If you flip a coin, you will get heads 50% of the time... therefore aliens!

 

 

What is your simulation supposed to prove? You should probably try explaining it with math instead.

Is it that you've arrived at the same answer using two techniques, one of which is QM and the other is Malus' law?




#5159674 Spot Light Collision

Posted by Hodgman on 11 June 2014 - 12:25 AM

I'm using Bullet Physics and trying to create btConeShape according to the spot light information that I have

That would've been useful to post in the OP :P

How do I create bounding frustum out of those variables to check for spot light collision?
How do I use these variable to construct a btConeShape in the Bullet physics library?

 
The btConeShape requires you to tell it the axis the cone is aligned with (by choosing btConeShape/btConeShapeX/btConeShapeZ), the height of the cone from pointy bit to the centre of the circle, and the radius of the circle.
I'm guessing your 'Range' variable is the height, 'OuterCone' is probably an angle in radians from that centre-line (which you can use to find the radius of the circle at 'Range' distance away using trigonometry), and you can use your 'Direction' to build a btTransform that will rotate the cone to the correct orientation.
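
Something roughly like this (untested sketch with a made-up function name -- in real code you'd keep ownership of the shape and collision object rather than leaking them, and the mid-height offset / apex direction is exactly the part worth checking with the debug drawer):

#include <algorithm>
#include <cmath>
#include <btBulletCollisionCommon.h>

// Assumes 'range' is the cone height (apex to far circle) and 'outerConeAngle'
// is the half-angle of the cone in radians.
btCollisionObject* createSpotLightCone(const btVector3& position,    // light position (cone apex)
                                       const btVector3& direction,   // unit-length spot direction
                                       float range,
                                       float outerConeAngle)
{
    float radius = range * std::tan(outerConeAngle);       // radius of the far circle
    btConeShape* shape = new btConeShape(radius, range);   // Y-aligned cone, centred on its mid-height

    // Rotate the cone's default +Y axis onto the spot light's direction.
    btVector3 up(0.0f, 1.0f, 0.0f);
    btVector3 axis = up.cross(direction);
    if (axis.length2() < 1e-6f)                            // direction is (anti)parallel to +Y
        axis = btVector3(1.0f, 0.0f, 0.0f);
    float cosAngle = std::min(1.0f, std::max(-1.0f, up.dot(direction)));
    btQuaternion rotation(axis.normalized(), std::acos(cosAngle));

    // Bullet centres the cone on its middle, so shift it half-way along the direction.
    // Depending on which end Bullet treats as the apex you may need to flip the cone;
    // the debug drawer will show you straight away.
    btVector3 centre = position + direction * (range * 0.5f);

    btCollisionObject* object = new btCollisionObject();
    object->setCollisionShape(shape);
    object->setWorldTransform(btTransform(rotation, centre));
    return object;
}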
 
What have you tried so far? What part is the problem? Have you tried using bullet's debug draw features to see what's happening?




#5159647 glfwPollEvents on It's own Thread?

Posted by Hodgman on 10 June 2014 - 08:49 PM

I don't know about other platforms, but on Windows, that call has to be in the same thread that created the window.




#5159635 Alan Turing

Posted by Hodgman on 10 June 2014 - 06:05 PM

But we now have a non-thinking program which has passed the test. This means the original hypothesis is incorrect.

Or the specific test that has been 'passed' wasn't designed within the spirit of the original idea for the test...

 

Did Turing ever describe the test in detail? If someone asked him, "What if I design a machine via an intricate set of non-thinking rules, to parrot out statements that would fool a minority of people into believing the machine is an adolescent boy, assuming they only speak to it for less than 5 minutes", would he agree that this design fell within the spirit of his idea, and would he conclude that such a machine would be "thinking"?

IMHO, when you say it out loud like that, it's pretty obvious that this is not in the spirit of the test, when you know the test is supposed to show evidence of thought... A machine that can integrate itself into human society, making friends and fooling them and coworkers into believing that it's a real person 24/7 is worlds apart from the above demonstration...




#5159207 cosine term in rendering equation

Posted by Hodgman on 09 June 2014 - 02:48 AM

That is an interesting question actually. The extreme case of an ideal mirror should be easy to answer but I realised I wasn't sure either...

 

I haven't pulled out my textbooks yet, but just doing a quick MSPaint doodle first :) ... the cosine term is there because (see images below), in the top image, the orange 'ray' of light is spread over a wider area when it hits the surface at a shallower angle. The shallower the angle, the wider the area -- we're fine with this.

dot(N, L), where N is the normal of the black surface line and L is outwards along any orange line, is the term we use here.

[image: MSPaint doodle -- orange light rays and thin grey view rays hitting a surface (black line), at a shallow angle in the top image and a steeper angle in the bottom image]

 

But shouldn't we also have some kind of cosine term when evaluating the viewer? Say that the viewer is looking at this surface flat on (i.e. dot(N,V)==1.0). That would make the viewer rays be the thin grey lines in the image.

In the top image, the light rays are spread out over a large area, but the view rays are much more concentrated.

 

Or alternatively, let's pretend that orange rays are view rays, and thin-grey are light rays. In the top scenario, the pixel that the viewer evaluates covers a very large area, whereas the pixel that the viewer evaluates for the bottom scenario covers a smaller area.

Should we be incorporating the projected-area of the pixel with respect to the viewer, as well as the projected-area with respect to each light? What would this term be? Would you divide the results by dot(N,V)?




#5159174 How much do programmers earn?

Posted by Hodgman on 08 June 2014 - 09:05 PM

Anywhere from $40k if they've taken a junior role at a small games studio as a sacrifice to get into an industry they love, because all the employers they want to work for either have no jobs, or don't trust a non-games programmer without a probation period in a junior role ..... to $250k+ if they've risen through the ranks of a large corporation and are in a key technical executive role...

 

You wanna supply some more information? :P




#5159163 Mysterious Rigids On Rendered Textures

Posted by Hodgman on 08 June 2014 - 06:25 PM

In case anyone else comes across this thread, here's the official explanation and workarounds for the dreaded Direct3D9 half pixel offset issue:
http://msdn.microsoft.com/en-us/library/windows/desktop/bb219690(v=vs.85).aspx

If using -1:1 clip-space coordinates, then instead of -0.5 pixels, the offset would be -1/width and +1/height (one pixel spans 2/width units horizontally in that space).
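
As a sketch in plain C++ of what the vertex shader ends up doing with that offset (width/height being the viewport size in pixels):

struct Float4 { float x, y, z, w; };

// Shift the clip-space position by half a pixel, scaled by w so the offset
// survives the perspective divide.
Float4 applyHalfPixelOffset(Float4 clipPos, float width, float height)
{
    clipPos.x += -1.0f / width  * clipPos.w;   // clip space is 2 units wide, so 1/width == half a pixel
    clipPos.y +=  1.0f / height * clipPos.w;   // clip-space y points up, so the sign flips
    return clipPos;
}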


#5159071 Color grading shader

Posted by Hodgman on 08 June 2014 - 07:17 AM

There are a few issues that will affect quality in your shader.

 

1) You should be using linear filtering instead of nearest, so that each of your two samples is the bilinearly interpolated result of the nearest 4 texels in the LUT. The mix at the end should only be based on the fractional blue value: with your 2D texture, red & green will use HW bilinear filtering, but blue won't, so it needs to be emulated manually.

 

i.e. when using a 3D LUT with linear filtering, the HW mixes 8 texels. Using the above scheme, the HW mixes 4 texels, 2 times, and then you lerp those two results together to get the exact same answer. With your current code, you only fetch 2 source texels and then mix them, which will give a much lower quality result.

 

2) You need to be very precise with your texture coordinates. The centre of the leftmost texel is at U = 0.5/256 (or U = 0.5/16 if the texture was 16 pixels wide).

Your current code is acting as if, for the bottom slice (the blue=0% layer), the leftmost texel (the red=0% column) is centred at 0.0 (0/256) and the rightmost (the red=100% column) at 0.0625 (16/256). Instead you should use the range 0.5/256 to 15.5/256.

Likewise for the height / green axis -- it should run from 0.5/16 to 15.5/16.

The artefact caused by this will be a very, very small loss of quality / a small rounding towards 0/255 and 255/255 / rounding away from 128/255.
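
Putting points 1 and 2 together, the coordinate math might look something like this (names are made up; it assumes a 16x16x16 LUT stored as a 256x16 strip of sixteen 16x16 slices, with red across each slice, green down it, and blue selecting which pair of slices to blend -- the same expressions go in the pixel shader, which samples the strip twice with linear filtering and lerps by blueBlend):

#include <algorithm>
#include <cmath>

struct LutSample { float u0, u1, v, blueBlend; };

LutSample lutCoords(float r, float g, float b)   // r, g, b in [0,1]
{
    const float sliceSize = 16.0f, lutWidth = 256.0f, lutHeight = 16.0f;

    float bScaled = std::min(std::max(b, 0.0f), 1.0f) * (sliceSize - 1.0f);  // 0..15
    float slice0  = std::floor(bScaled);
    float slice1  = std::min(slice0 + 1.0f, sliceSize - 1.0f);

    LutSample s;
    // Texel centres: the first texel of a slice is at 0.5/256, the last at 15.5/256.
    s.u0 = (slice0 * sliceSize + 0.5f + r * (sliceSize - 1.0f)) / lutWidth;
    s.u1 = (slice1 * sliceSize + 0.5f + r * (sliceSize - 1.0f)) / lutWidth;
    s.v  = (0.5f + g * (sliceSize - 1.0f)) / lutHeight;                      // 0.5/16 .. 15.5/16
    s.blueBlend = bScaled - slice0;   // mix factor between the two slice samples
    return s;
}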

 

You can verify this by saving a screenshot with and without your color grading applied - using a completely standard LUT texture, both screenshots should be bit for bit exact. If there's any changes at all, then your code or your LUT are slightly wrong.




#5159032 cosine term in rendering equation

Posted by Hodgman on 08 June 2014 - 02:06 AM

It depends what's in your cube-map.

 

If it's just a plain image -- 6 renderings of the surrounding environment, then what you're doing is basically treating every pixel in that image as a distant directional-light source. For this, you'll need the full BRDF as well as the cosine term (which is why your results are too dark). i.e. you'll need to have the full normalized-blinn-phong (or alternative distribution), the Fresnel term, etc, and the cosine term. n.b. here that the specular distribution function can return values that are >1.0!

Also, with this kind of input data, you can't just pick a single texel from the cube-map and use that to evaluate a single directional light -- you need to evaluate all of the texels in the cube-map (actually: the half of the cube-map where the cosine term is >0), or at least some large number of texels, usually chosen via importance sampling. Otherwise, any non-perfect-mirror material will be too dark, as you'll only be lighting it by a single directional light instead of cubemapResolution² * 6 lights!

 

Because this 'ground truth' approach doesn't work very well in realtime (that's a lot of lights!), usually we pre-process the cube-maps such that they aren't just regular images any more. You do the above steps ahead of time, basically producing a new lookup-table cubemap!

For some number of output directions (the texels in the resulting cube-map), you loop over all the source texels, run the full rendering equation using that output-texel's direction as the surface normal. You'll then usually use the output mip-level as the surface roughness, so when sampling the resulting LUT cubemap at runtime, you can get correct results for non-mirror surfaces too.

The other important parameter to the specular BRDF is the view/eye direction... unfortunately the LUT would take up too much memory if we also pre-computed every single permutation of view directions, so when doing our precomputations, we just make the view direction the same as the normal, etc... The "Real Shading in Unreal Engine 4" talk has some info on how they used a second 2D LUT as well as this cube-map LUT to correct for the fact that the cube was pre-computed only using a single assumed view-direction (which mostly results in more accurate Fresnel).

Now, when you sample from this cube-map in your shader at runtime, the BRDF and the cosine term have already been applied during precomputation, so you don't have to do it again in your shader :)
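
A heavily simplified sketch of that pre-processing loop (made-up types; this is the cosine-weighted / diffuse-style flavour -- a specular pre-filter would weight by the full BRDF for the chosen roughness, and would importance-sample rather than brute-force every source texel):

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct CubeTexel {
    Vec3  dir;        // unit direction through the centre of this source texel
    Vec3  radiance;   // colour stored in the source cube-map
    float solidAngle; // solid angle subtended by the texel
};

// Cosine-weighted average of the source radiance for one output direction N.
// Run this for every texel of the output cube-map (and per mip for roughness).
Vec3 prefilter(const std::vector<CubeTexel>& sourceTexels, const Vec3& N)
{
    Vec3  sum = {0, 0, 0};
    float weightSum = 0.0f;
    for (const CubeTexel& t : sourceTexels)
    {
        float w = std::max(dot(N, t.dir), 0.0f) * t.solidAngle; // cosine term; back hemisphere rejected
        sum.x += t.radiance.x * w;
        sum.y += t.radiance.y * w;
        sum.z += t.radiance.z * w;
        weightSum += w;
    }
    if (weightSum > 0.0f) { sum.x /= weightSum; sum.y /= weightSum; sum.z /= weightSum; }
    return sum;
}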




#5158708 Vertex Shader vs Pixel Shader - Where to do processing?

Posted by Hodgman on 06 June 2014 - 08:17 AM

^what he said

1) No. For low-poly meshes and small light sources, there will be large differences.

2) If going for correctness, whether the interpolated calculation is the same... If going for speed, whether the artefacts are acceptable compared to the performance boost.

3) Yes.





