
Hodgman

Member Since 14 Feb 2007

#5292389 Game Prices on Steam: should there be regulation/guidelines?

Posted by Hodgman on 18 May 2016 - 06:56 PM

Valve does regulate prices on Steam. It's not an open marketplace where the developer gets to make these decisions unilaterally - it's the developer working with Valve as their publisher.

 

Personally, seeing a $2 game on Steam would damage its perceived value for me, as mentioned above. I would be prejudiced against it.

Mobile games trended towards $1, because they're a lot smaller and simpler than traditional PC/console games.

If Angry Birds were on PC, it would not make many sales with a $60 price tag. It has a low price because it's not comparable to your typical PC game.

 

Sure there's a lot of competition on Steam, but I don't think we'll see a race to the bottom like we did on mobile, because

- PC gamers tend to want larger, more in-depth experiences which can't economically survive off a $1 price tag.

- PC gamers aren't as tolerant of in-game advertising, which is how most mobile games make their money these days. The typical successful mobile game is free to play, but earns millions off ads. That business model doesn't translate to PC very well.

- PC gamers are accustomed to new games costing $60.

 

The last one is pretty important. New release console games in Australia have always been AU$100+... When the US$ crashed, making 1AUD worth more than 1USD, did retailers drop the price of games here? No, they kept selling them at AU$120 -- double their retail price in the USA, because that's what the local market was accustomed to paying. Even on Steam, there's region specific pricing, so the price of a game can be doubled if they detect that you're using an Australian IP address / credit card...

 

Also, more importantly, simply dropping the price of your game from $10 to $2 is not going to get you 5x more sales, which means it's not a smart business decision. Demand for PC games is somewhat price-elastic, but nowhere near elastic enough for a cut like that to pay for itself.

It's a better decision to spend money on marketing to actually increase sales. This is what AAA games do -- they spend $10M making the game, and then $10M on telling people to buy it. From that $10M marketing spend, they might generate 2M sales (a user acquisition cost of $5/user)... At $50 each, that's $100M retail, $70M wholesale, a $50M profit after deducting the $20M of costs, and $25M after tax. On the other hand, if they released it for $2 and spent nothing on marketing, they would get somewhere between zero and 2M sales, bringing in $0 to $4M retail, $0 to $2.8M wholesale, and a loss of between $10M and $7.2M after deducting the development cost...




#5292259 Starfield with 3D points: ideas on how to create light effect around the star...

Posted by Hodgman on 18 May 2016 - 06:10 AM

Yeah I would try point sprites / small quads too.

 

I was coincidentally thinking about a sky renderer recently and one idea I came up with was:

  • Import a star catalogue with coordinates, intensity values.
  • Convert coordinates into 3D directions in the game-world's coordinate system.
  • Create a cube-map of sufficient resolution, fill with black pixels. Make it a floating point texture for simplicity (not required though).
  • For each star, find the texel in the cubemap in the direction for this star.
    At that location, write the exact (x,y,z) direction into RGB, and the intensity in Alpha.

At runtime, to render the sky:

  • For each background pixel/ray, get the direction away from the camera.
    Rotate this into the coordinate system that was used to generate the cubemap (e.g. if on the surface of earth, this will vary with time of day / month / location).
  • Fetch the N closest cubemap texels to this direction. If you use N=1, this is a single point-filtered texture fetch.
  • For each texel, calculate the angular difference between the ray's direction and the stored XYZ value (via the dot product). Define a Gaussian centred on that XYZ direction, with a width defined by the star's intensity (texel Alpha value). Evaluate this distribution for the view ray's direction, and add the result (scaled by the star's intensity) to this view-ray's colour.

This should give a Gaussian-blurred starfield (similar to a post-process bloom, but with better anti-aliasing) without any of the blocky/pixely texture artifacts you'd get from a low-resolution cubemap. The downside is that with a low-res texture, you can't have two stars appear close together... The cubemap simply has to be high enough resolution that no two stars try to store themselves in the same cubemap texel... If that does happen, then just store the brightest star only. You can also adjust cost/quality by increasing/decreasing the width of your Gaussian distribution, though as you make it wider, you have to increase the N value as well.
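To illustrate that last evaluation step, here's a rough CPU-style sketch of how one view ray could accumulate the Gaussian contribution of the N fetched texels (plain C++ for illustration only -- the width-from-intensity mapping and helper names are assumptions, not part of any particular engine):

#include <cmath>

struct Float3 { float x, y, z; };
struct StarTexel { Float3 dir; float intensity; }; // RGB = exact star direction, A = intensity

static float Dot( const Float3& a, const Float3& b ) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Accumulate the Gaussian "splats" of the N closest cubemap texels for one view ray.
// viewDir and each stored dir are assumed to be unit length.
float EvaluateStarfield( const Float3& viewDir, const StarTexel* texels, int count )
{
	float result = 0.0f;
	for( int i = 0; i < count; ++i )
	{
		if( texels[i].intensity <= 0.0f )
			continue; // black texel -- no star stored here
		// Angular difference between the view ray and the star's exact direction:
		float cosAngle = Dot( viewDir, texels[i].dir );
		float angle = std::acos( std::fmin( 1.0f, std::fmax( -1.0f, cosAngle ) ) );
		// Gaussian centred on the star's direction; the width mapping here is made up:
		float sigma = 0.001f + 0.002f * texels[i].intensity;
		float falloff = std::exp( -(angle * angle) / (2.0f * sigma * sigma) );
		result += texels[i].intensity * falloff;
	}
	return result;
}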

 

As an extension, use an R8G8B8A8 cubemap, storing the direction in Crytek BFN format, and the intensity in some kind of logarithmic encoding (or another way to compress this data smaller - e.g. 2D delta from the texel direction and a 16bit intensity). Also, instead of intersecting a ray with the Gaussian curve, integrate the intersection of a screen-pixel sized cone (defined by the pixel's solid angle) with the star's Gaussian distribution, which will give much better / anti-aliased results when zoomed out. You can combine this with another background cubemap for general starfield glow (milky way shape), star colour tints / etc...




#5292257 maybe it could, possibly, this pass, do THIS sort of thing, for now

Posted by Hodgman on 18 May 2016 - 05:54 AM

Do you have a producer or project manager, as well as the designer? I'd probably get cranky too, and go ask the producer whether we're prototyping / exploring, or working towards a known target / deadline :P

That kind of weak design is basically asking you to do the designing for them... which can be fine, if you know that's what you're supposed to be doing. I've worked on games where the programmers have knowingly been given only broad strokes and have been responsible for all the fine details, and in one case they developed new genre-defining gameplay systems. So it can be good for the people with the power of implementation and iteration to have ownership over parts of the design... as long as they know they've got that responsibility. In other situations, the producer might be pissed that the programmers are faffing about in exploration/iteration mode, and the designer is wanking onto a page while a milestone deadline looms overhead...




#5292254 VR, AR resources and development steps

Posted by Hodgman on 18 May 2016 - 05:30 AM

For 2/3, are you talking about developing new VR hardware, or the VR software part of your game?

There's no need for a game-dev to be making VR hardware - there are going to be a handful of very good VR HMD brands servicing that market with high quality products; Oculus and HTC to start with. Plus there's the PS VR, which is the only hardware choice for Playstation developers. If you do want to though, there are DIY HMD kits available for ~US$30, favored by drone racers and those with shoebox apartments :lol:

 

The hardware cost to developers is fairly minimal for any game-dev as a business (as opposed to game-dev as an expensive hobby). An amazing HMD such as the Vive is equivalent in cost to hiring a contract programmer for a day or two.

I've been supporting VR since the Oculus DK1 in 2012, and the dev-kits have been around $400 each (or much cheaper 2nd hand!), which is half the price of the final consumer version. We have an Oculus DK1, two Oculus DK2's, an Oculus CV1 (current consumer version), two HTC Vive prototypes and a Sony PS VR prototype -- all up costing us about US$1100. That figure is so low because some of those companies have been nice enough to give us free hardware, as they really want gamedevs making products that support them. Just look at the dev list that Sony put out for PS VR - that's a lot of PS VR units in the hands of devs already.

 

As for the software side, if you're using a decent engine, such as Unity or Unreal, then most of the work has already been done for you.

The hard parts are:

  • Making VR-friendly gameplay -- usually that means cockpit based games, or ones with minimal movement... Although a bundled Oculus title is a 3rd person adventure, so that doesn't always apply.
  • Making a VR-friendly camera. It should be mostly HMD controlled and never surprise the player.
  • Never drawing any UI stuff directly to the screen. The easiest solution is doing traditional 2D GUI's, but rendering them to a texture that is then placed on a quad hovering half a meter in front of the player, where they can focus on it easily.
  • Making sure that your game always runs at a solid 90 frames per second on your minimum-spec PC hardware, with two full-HD viewports... That's a challenge if porting to VR, but performance budgets are manageable if you're aware of them during the entirety of the project's lifecycle.

If you keep those pitfalls in mind from the beginning, then the impact on your software is pretty minimal.

If you're using your own engine, then a simple integration with the Valve or Oculus SDKs (the Valve SDK works with both Oculus and HTC hardware, so you can just use the one) only takes about a day. Call it a week if you want to get it perfect. Of course, if your game already has a bunch of GUIs / cutscenes / camera-controllers / etc, then you've probably also got a bunch of re-work to do.

 

For 4 - The price is pretty high at the moment. IIRC, Vive is $800, Oculus is $600 (but the yet-to-be-released "Oculus Touch" controllers will probably bring it up closer to the Vive's price) and PS VR is $400. For the Vive/Oculus, you should probably also spend $1000 on a new GPU if you haven't already! :lol: This is going to restrict the userbase in the near-term... although the number of orders for Vive/Oculus is still pretty impressive so far!

IMHO, this means that PS VR is the one that's most likely to see anything close to mass market penetration any time soon. For $800 total you can get a console, HMD and motion controllers -- compared to more like $2800 for a high end gaming PC, HMD and motion controllers. Playstation is also selling a product that anyone can easily set up in their living room - it's a lot more mass-market friendly than anything attached to PC-gaming.

These high prices are to be expected though -- adjusted for inflation, the original NES, PS1 and PS2 were all about $400, and the PS3 closer to $600. Early adopters pay a premium, and in the later years, the prices start to fall.

 

As for mass market penetration, the big picture is not actually about gaming. Oculus and HTC are not gaming companies. HTC is a consumer electronics company. Oculus is a VR company that's owned by an advertising / social media megacorp. Oculus' business is not to make a mass market gaming device; it's to bring VR products to a mass market in general. Gamers are simply their early adopters who are willing to pay a premium for the first iterations of their products. You can be sure that they want games to eventually only be a small part of their business, as they expand into every other kind of business activity that can be supported by VR -- largely social experiences. They're also working towards full body VR, with eyes, ears, head and three fingers being just their very first stepping stone towards that goal.

[edit]Or check out this picture from google today -- Sports, TV and everyday applications sharing equal placement with VR gaming[/edit]

 

As well as the above HMD's, there's also the GearVR (and "cardboard"/knock-offs), which are $100 for the real deal, or $2 otherwise. These are also going to be part of the mainstreamification of VR, as the barrier to entry is much lower (assuming you've got a nice new Samsung phone in your pocket already) and there are already hundreds of "experiences" that interest non-gamers available on there.

 

As for AR, there's of course the hololens, but smartphones are also going to seamlessly grow into this marketplace. The GPU, internet, sensors, camera and screen in a modern smartphone make it perfect for AR applications. Future hardware and software advances will cement it.

That's going to create two kinds of AR -- the hand-held window type you can get from a phone/tablet, and the head-mounted type. The latter is going to be more expensive / less accessible, probably piggybacking off VR for the near future... At least until people decide that google glass is cool :D




#5292219 Theory behind spherical reflection mapping (aka matcap).

Posted by Hodgman on 18 May 2016 - 12:40 AM

Yeah it's a kind of length formula... the +1 makes one side of the sphere lie on z=0, and the other side lie on z=2. That link is just an intermediate term in the whole formula. The full formula maps the front of the sphere into a circle, and the back of the sphere into a ring surrounding that circle. So the front of the sphere becomes the center of the image, the horizon becomes a circle about two thirds between the center and the edge, and the back of the sphere becomes a circle at the outer edges of the image.
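For reference, the full mapping being described is just a few lines (a sketch in plain C++; this matches the classic OpenGL spheremap formula, though exact conventions vary):

#include <cmath>

// Map a unit 3D direction (e.g. a view-space reflection vector) to 2D spheremap
// texture coordinates in [0,1]. The (z + 1) term is the "+1" mentioned above:
// it shifts the sphere so that it spans z = 0 .. 2.
void DirectionToSpheremapUV( float x, float y, float z, float& u, float& v )
{
	float m = 2.0f * std::sqrt( x*x + y*y + (z + 1.0f)*(z + 1.0f) );
	u = x / m + 0.5f;
	v = y / m + 0.5f;
}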




#5292191 Vulkan and mipmap generation

Posted by Hodgman on 17 May 2016 - 07:55 PM

On some of my back-ends, I implement mip-map generation using compute - which turned out to be faster than my PS based approach. The CS optimization that I used was to calculate and generate 3 mip levels per dispatch, which greatly reduces the number of passes required to compute the full chain.

i.e. I read 8x8 pixels, output 4x4 to the 1st RWTexture, 2x2 to the 2nd, and 1px to the 3rd.
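As a rough CPU-side illustration of what each group of threads ends up computing (not my actual shader code -- single-channel floats and a plain box filter assumed for brevity):

// For one 8x8 tile of the source mip, produce the matching 4x4, 2x2 and 1x1
// regions of the next three mip levels with a simple box filter.
void DownsampleTileThreeMips( const float src[8][8], float mip1[4][4], float mip2[2][2], float& mip3 )
{
	for( int y = 0; y < 4; ++y )
		for( int x = 0; x < 4; ++x )
			mip1[y][x] = 0.25f * ( src[y*2][x*2] + src[y*2][x*2+1] + src[y*2+1][x*2] + src[y*2+1][x*2+1] );
	for( int y = 0; y < 2; ++y )
		for( int x = 0; x < 2; ++x )
			mip2[y][x] = 0.25f * ( mip1[y*2][x*2] + mip1[y*2][x*2+1] + mip1[y*2+1][x*2] + mip1[y*2+1][x*2+1] );
	mip3 = 0.25f * ( mip2[0][0] + mip2[0][1] + mip2[1][0] + mip2[1][1] );
}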

 

You're right though that this graphics->compute->graphics transition could be quite bad on some GPU's... so maybe I should have two code-paths -- this CS mipping for some GPU's, and a PS fallback for others...




#5292050 Are there any services for reducing network delay/latency?

Posted by Hodgman on 17 May 2016 - 06:36 AM

For example: You might find that all of your traffic out of your servers in Singapore is routed through California, no matter the destination. You might then have no choice but to designate SG->Calif->NY->London->Dubai as your fastest SG->Dubai route... However, if you make a deal with an SG ISP, you may be able to get your traffic onto a different cable that goes in the other direction around the world, cutting the trip in half.




#5292047 Material, Shaders, Shader variants and Parameters

Posted by Hodgman on 17 May 2016 - 06:22 AM

[quote]Thanks for the replies! I forgot to mention I'm currently targeting OGL 2.0 and DX9 where UBOs are not available, that's why I've chosen this ugly, old-school design. :) However I'm thinking on "upgrading" to OGL >= 3.0 and DX >= 11.0 since I'm using Deferred Shading in the engine. And if the hardware can handle deferred shading it probably supports at least OGL 3.0 as well.[/quote]

I support D3D9 under my API by emulating CBuffers on top of it :)

My permutation selection function looks like below.
The algorithm is not very intuitive at first, but it does guarantee that you never deliver a permutation that has options that were not requested, and also guarantees you do deliver the permutation that most closely matches the request, which is exactly what we want.
e.g. imagine someone asks for normal mapping to be enabled, but this technique/material does not support such a feature -- your string lookup will fail in this case, whereas this algorithm simply ignores that invalid request.
It does require you to pre-sort your list of permutations/bindings/effects by the number of bits set in their options mask, from highest to lowest, and to always end the list with a permutation with no bits set (an 'all options disabled' permutation), which will always be a valid fallback result (meaning that the return -1; at the bottom should never be reached). Unfortunately it's an O(N) search, but N is the number of permutations in the pass, which is usually very small, so that's not really an issue. Your dictionary lookup could in theory be O(1) or O(log N), yet it's likely waay slower than this -- e.g. 8 of these Permutation objects will fit in a single cache line, which is a single RAM transfer, so if you've got a handful of permutations the linear search may as well be O(1) :wink: You should also do this search once (ahead of time) and reuse the result every frame until your material is edited -- if you follow that advice, it doesn't really matter how expensive the permutation lookup is! :D

int internal::ShaderPackBlob::SelectProgramsIndex( u32 techniqueId, u32 passId, u32 featuresRequested )
{
	eiASSERT( techniqueId < numTechniques );
	Technique&         technique    = techniques[techniqueId];
	List<Pass>&        passes       = *technique.passes;
	if( passId >= passes.count )
		return -1;
	Pass&              pass         = passes[passId];
	List<Permutation>& permutations = *pass.permutations;
	u32 prevPermutationFeatureCount = 64;
	for( u32 k = 0, end = permutations.count; k != end; ++k )
	{
		Permutation& permutation = permutations[k];
		eiASSERT( prevPermutationFeatureCount >= CountBitsSet(permutation.features) );
		prevPermutationFeatureCount = CountBitsSet(permutation.features);//debug code only - check the array is sorted correctly

		if( (featuresRequested & permutation.features) == permutation.features )//if this does not provide features that weren't asked for, we'll take it
		{
			return permutation.bindingIdx;
		}
	}
	return -1;
}
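The pre-sorting requirement mentioned above can be handled with a one-off sort when the shader pack is built -- something like this sketch (it assumes the same Permutation / CountBitsSet names as above, and that an 'all options disabled' permutation is already in the list):

#include <algorithm>

// Sort permutations by number of option bits set, highest first, so that the
// linear search above always returns the closest match to the request.
void SortPermutationsForSelection( Permutation* permutations, u32 count )
{
	std::sort( permutations, permutations + count,
		[]( const Permutation& a, const Permutation& b )
		{ return CountBitsSet( a.features ) > CountBitsSet( b.features ); } );
}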



#5292046 Theory behind spherical reflection mapping (aka matcap).

Posted by Hodgman on 17 May 2016 - 06:00 AM

That's the spheremap transform (or, the 3D direction to 2D texture coordinate half of it). It even used to be part of the fixed function pipeline, back before cubemaps were standard. See: ftp://ftp.sgi.com/opengl/contrib/blythe/advanced99/notes/node177.html

 

You can use it with sphere normals, reflection directions, or any direction. It just maps the entire surface of a 3D sphere into a single 2D circle -- much like how a cubemap maps the entire surface of a 3D sphere into six 2D squares.

 

The right way to look up a matcap depends on how it's generated. Spheremapping allows the full sphere to be saved into the matcap, including the back. This is important for reflection maps, as the reflection vector for the edges of an object will point directly away from the camera.

Other times, maps might be authored for lookup using the view-space normal, e.g. for diffuse matcaps.

Yes, some might also be authored for lookup with a simple "uv = normal.xy*0.5+0.5" instead of a spheremap -- this is a hemi-spheremap: a transform between a half sphere and a 2D circle. There's also paraboloid maps, which do the same thing with different properties. These are most often seen as a dual-paraboloid map, which encodes a full sphere instead of a hemisphere - e.g. http://graphicsrunner.blogspot.com.au/2008/07/dual-paraboloid-reflections.html




#5291993 Material, Shaders, Shader variants and Parameters

Posted by Hodgman on 16 May 2016 - 11:35 PM

Yeah the model of shader-instances/effects/etc actually being able to hold parameters (uniform values) is an abstraction that makes sense for 2006's GeForce 7900 GT... but not for anything newer. The reason it makes sense for that old era of GPUs, is that many didn't actually have hardware support for uniform values at all, but they did support literal / hard-coded values... so uniforms were implemented by actually patching the shader code itself - meaning the uniform value was stored inside the program object.

 

The way the hardware actually works after that point in time is that you bind shader-programs to the context, which are just code (no uniform values), and you bind resources to the context, such as textures, vertex-buffers, uniform-buffers, etc...

So, I would definitely recommend making a system based around UBO's, not individual uniforms. A UBO-based design will also port easily to D3D/Vulkan/etc, while a uniform-based design will require emulation on these other APIs.
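As a minimal sketch of what the GL 3.x side of that looks like (not my engine's code -- the struct layout, binding point and usage flag here are placeholders, and std140 padding rules still apply):

#include <GL/glew.h>

// One buffer holds all of a material's uniform values; the GLSL shader declares
// a matching uniform block instead of many individual uniforms.
struct MaterialParams // must match the std140 layout of the GLSL block
{
	float baseColour[4];
	float roughness;
	float metalness;
	float padding[2];
};

GLuint CreateMaterialUBO( const MaterialParams& params, GLuint bindingPoint )
{
	GLuint ubo = 0;
	glGenBuffers( 1, &ubo );
	glBindBuffer( GL_UNIFORM_BUFFER, ubo );
	glBufferData( GL_UNIFORM_BUFFER, sizeof(MaterialParams), &params, GL_DYNAMIC_DRAW );
	glBindBufferBase( GL_UNIFORM_BUFFER, bindingPoint, ubo ); // bind to a known slot
	return ubo;
}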

 

What I've got, roughly:

Shader (pixel/vertex/etc) -- not exposed as part of the API / this is an internal implementation detail.

ProgramBinding -- a set of vertex + pixel/hull/domain/geo shaders. This is at a low-level not really visible to users - it's the "shader object" that the low-level renderer works with.

Technique -- this is the "shader object" that the user actually works with. It has a collection of passes, which the user can refer to by name or index, but there's no class associated with them.

Pass -- this is one "aspect" or "stage" of a shader -- e.g. ForwardLightingPass, GBufferAttributePass, TranslucentLightingPass, DepthOnlyPass. A technique can have multiple passes, because in a modern engine, the same object will be drawn in multiple stages -- e.g. in a Z-pre-pass followed by a forward-lighting pass. A pass also has a list of permutations. Each permutation is a combination of option values (key) mapped to a ProgramBinding (value).

Options -- When authoring shaders, you've got a 32bit field that you can allocate "shader options" within -- e.g. bit #0 could be normal-mapping on/off, bits #1/2 could be "light count [0,3]". The shader compiler iterates through every permutation of options for each pass and compiles all of them into the appropriate ProgramBindings. At runtime, the user can (indirectly) select shader permutations by supplying option values alongside their technique.

i.e. If the user has a technique, a pass index, and a set of shader-option values, then they can look up a ProgramBinding.
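As a concrete (made-up) example of what that option layout and the lookup could look like from the user's side:

// Hypothetical option layout for one shader:
//   bit 0    = normal mapping on/off
//   bits 1-2 = light count in [0,3]
const u32 OPT_NORMAL_MAPPING    = 1u << 0;
const u32 OPT_LIGHT_COUNT_SHIFT = 1u;

u32 MakeFeatureMask( bool normalMapping, u32 lightCount /*0..3*/ )
{
	return ( normalMapping ? OPT_NORMAL_MAPPING : 0u )
	     | ( (lightCount & 0x3u) << OPT_LIGHT_COUNT_SHIFT );
}

// e.g. technique + pass index + option values -> a concrete ProgramBinding index:
// int programIdx = shaderPack.SelectProgramsIndex( techniqueId, passId, MakeFeatureMask( true, 2 ) );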

 

Material - not necessarily a single class - different systems can implement their own material solutions. A material will be a technique, a set of shader options, fixed-function state overrides (e.g. blend modes) and resource bindings (textures, UBO's, etc). Techniques provide reflection tables, where a material system can find uniforms by name (i.e. UBO-slot/location and offset into that UBO), texture-slots/locations by name, etc... Material classes can use this data to allow the user to set values by name, while they're actually maintaining UBO's of values.

 

RenderPass - a set of destination textures (FBO's) and a shader Pass index. The pass index is used to find the appropriate ProgramBinding's from each Technique.




#5291970 What's the advantage of Spherical Gaussians used in The Order vs. Envirom...

Posted by Hodgman on 16 May 2016 - 05:23 PM

So, you're suggesting a lightmap, where each "texel" of the lightmap is actually 3x3 texels, containing an environment map (I'm guessing traditional spheremap)?

First off, those traditional maps are circular, so you'd have to square the circle to avoid wasting your corner texels, plus a 3x3 spheremap would have one "straight up" sample and a ring of 8 "down and outwards" samples, which is not a very uniform sphere coverage.

At the end of the day, you're still storing 9 light values with 9 hard-coded directions.

 

The difference is that your 9 directions are an arbitrary choice stemming from your choice of env-map layout, whereas in the SG method they were able to very carefully pick their sample directions and the lobe widths of those samples, in order to get as much detail as possible out of such a small sample count.
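For context, a single spherical Gaussian lobe is usually written as amplitude * exp(sharpness * (dot(axis, dir) - 1)), so each stored sample is an axis plus a width and a light value -- roughly (a sketch, not The Order's actual code):

#include <cmath>

struct Float3 { float x, y, z; };
static float Dot( const Float3& a, const Float3& b ) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// One spherical Gaussian lobe: 'axis' is the carefully-chosen sample direction,
// 'sharpness' controls the lobe width, 'amplitude' is the stored light value.
float EvaluateSGLobe( const Float3& dir, const Float3& axis, float sharpness, float amplitude )
{
	return amplitude * std::exp( sharpness * ( Dot( dir, axis ) - 1.0f ) );
}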

Also, IIRC, their 9 samples are actually 18 directions: + and - along each sample direction, where the direction that's above the local horizon is the one used. I might've imagined that bit tho...

 

So, you're trading efficiency for quality.




#5291969 DirectX12 Draw Auto replacement

Posted by Hodgman on 16 May 2016 - 05:09 PM

I guess you'd use ID3D12GraphicsCommandList::ResolveQueryData with D3D12_QUERY_TYPE_SO_STATISTICS_STREAM0 to copy the SO counter into your own buffer, which you could then use with draw indirect.
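Something along these lines, roughly (a sketch only -- resource state transitions, the BeginQuery call around the stream-out pass, and the small compute/copy step that turns the statistics into D3D12_DRAW_ARGUMENTS are all omitted):

#include <d3d12.h>

// After the stream-out drawcalls (bracketed by BeginQuery/EndQuery), copy the
// SO statistics for stream 0 into a readable buffer.
void CopySOCountForIndirectDraw( ID3D12GraphicsCommandList* cmd,
                                 ID3D12QueryHeap* soQueryHeap,
                                 ID3D12Resource* statsBuffer ) // receives D3D12_QUERY_DATA_SO_STATISTICS
{
	cmd->EndQuery( soQueryHeap, D3D12_QUERY_TYPE_SO_STATISTICS_STREAM0, 0 );
	cmd->ResolveQueryData( soQueryHeap, D3D12_QUERY_TYPE_SO_STATISTICS_STREAM0,
	                       0, 1, statsBuffer, 0 );
	// A small compute or copy pass would then convert NumPrimitivesWritten into a
	// D3D12_DRAW_ARGUMENTS struct in an args buffer, which is consumed by e.g.:
	// cmd->ExecuteIndirect( drawCommandSignature, 1, argsBuffer, 0, nullptr, 0 );
}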




#5291783 Learning from Other's Code

Posted by Hodgman on 16 May 2016 - 01:11 AM

Learn from more than one. Chances are they won't solve anything the same way, which means now you've got two great ideas to copy, and you can't copy both :P




#5291749 Proper output buffering algorithm

Posted by Hodgman on 15 May 2016 - 06:55 PM

Can you make the loop's idle state have a timeout - so while waiting for commands, it will also wake up on its own if no command is received within a certain amount of time?
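For example, a minimal sketch of such a loop using a condition variable (the command type, queue and 100ms timeout are placeholders):

#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>

std::mutex              g_mutex;
std::condition_variable g_wake;
std::queue<int>         g_commands; // placeholder command type

void ConsumerLoop( std::atomic<bool>& running )
{
	while( running )
	{
		std::unique_lock<std::mutex> lock( g_mutex );
		// Wake when a command arrives *or* after 100ms of idleness:
		bool gotCommand = g_wake.wait_for( lock, std::chrono::milliseconds(100),
		                                   []{ return !g_commands.empty(); } );
		if( gotCommand )
		{
			int cmd = g_commands.front();
			g_commands.pop();
			lock.unlock();
			// ... process cmd, appending to the output buffer ...
			(void)cmd;
		}
		else
		{
			lock.unlock();
			// Timed out with nothing queued: a good spot to flush the output buffer.
		}
	}
}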




#5291741 PBR 3D Models

Posted by Hodgman on 15 May 2016 - 05:23 PM

Most tools/engines are converging on two different workflows for specular maps.

 

In "traditional" game art, you usually had:

Specular Power: size/shape of the highlight

Specular Mask: intensity of the highlight.

These had lots of different names, such as gloss maps, spec-color maps, or just specular maps... but it usually boiled down to a power value and a mask value.

 

In PBR, the F0 ~= mask/spec-colour, and roughness/glossiness ~= power.

 

The two new workflows for authoring these values are the "spec/gloss" workflow and the "roughness/metalness" workflow.

Spec/gloss is very similar to the traditional workflow -- the monochrome gloss map controls the size/shape of the highlight, and the RGB specular map contains F0, which acts very similarly to a traditional RGB mask value. This workflow is easy for traditional game artists to understand due to the similarity. The main difference is that traditionally, artists put a lot of details into their masks/spec-colours, when they should now be putting detail into the power/roughness/gloss maps instead.

Metal/rough is different, but IMHO simpler and more intuitive -- the monochrome roughness map controls the size/shape of the highlight, but in a slightly different way (in some engines, it's inverted, so black = small highlights and white = large highlights), and the monochrome metalness map indirectly specifies the F0 value. If metalness is zero, then the F0 value is some hard-coded non-metal value, such as vec3(0.02, 0.02, 0.02), otherwise if metalness is one, F0 is the value stored in your material colour map. Moreover, metalness also affects your diffuse colour! If metalness is zero, the diffuse colour is the value stored in your material colour map, otherwise if metalness is one, the diffuse colour is black.

This is because pure metals should have bright RGB F0 values and black diffuse values, and non-metals should have monochrome, dark F0 values in approx the 2%-4% range, most of the time. So this workflow makes it harder for artists to create "impossible" materials -- such as having a bright blue diffuse colour and a bright red F0.
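In shader-style terms, that derivation is roughly the following (a sketch; the 0.02 non-metal F0 is the hard-coded value mentioned above -- many engines use something closer to 0.04):

struct Float3 { float x, y, z; };

static Float3 Lerp( const Float3& a, const Float3& b, float t )
{
	return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// Derive the diffuse colour and F0 from the material colour map + metalness map.
void DeriveMetalRough( const Float3& materialColour, float metalness,
                       Float3& diffuseColour, Float3& f0 )
{
	const Float3 nonMetalF0 = { 0.02f, 0.02f, 0.02f }; // hard-coded dielectric reflectance
	const Float3 black      = { 0.0f, 0.0f, 0.0f };
	f0            = Lerp( nonMetalF0, materialColour, metalness ); // metals: coloured, bright F0
	diffuseColour = Lerp( materialColour, black, metalness );      // metals: black diffuse
}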

 

So with Spec/gloss, you'd have diffuse colour (RGB), specular colour (RGB) and glossiness (mono).

And with Metal/rough, you'd have material colour (RGB), metalness (mono) and roughness (mono).





