
Member Since 14 Feb 2007
Online Last Active Yesterday, 11:52 PM

#5003642 What's a reasonable timestep

Posted by Hodgman on 23 November 2012 - 08:11 PM

First thing I noticed is that you're only using a millisecond-accurate timer. For high-speed animations at high frame rates, the error from that quantization will be significant (4% of a frame at 60Hz).

Someone mentioned earlier that Bullet has interpolation built in, so that if you tell it to advance one and a half frames' worth of time (e.g. 16.666ms * 1.5 if stepping @ 60Hz), then it will perform one sim frame and produce correct visual results that have been interpolated an extra half frame.
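To avoid the millisecond quantization mentioned above, you can read a high-resolution clock; a minimal sketch using std::chrono (the function name is illustrative, not from the post):

```cpp
#include <cassert>
#include <chrono>

// Sub-millisecond frame timing. steady_clock is monotonic, so the delta
// can never go negative when the system clock is adjusted.
static std::chrono::steady_clock::time_point g_lastFrame =
    std::chrono::steady_clock::now();

// Returns seconds elapsed since the previous call, at the clock's full
// resolution (typically nanoseconds on modern platforms).
double GetDeltaSeconds()
{
    auto now = std::chrono::steady_clock::now();
    std::chrono::duration<double> dt = now - g_lastFrame;
    g_lastFrame = now;
    return dt.count();
}
```

A millisecond timer would round a 16.67ms frame to 16 or 17ms; the chrono-based delta keeps the fractional part.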

#5003518 What is the difference between CPU and GPU?

Posted by Hodgman on 23 November 2012 - 09:28 AM

You can think of a GPU as basically being a very-very-very-wide SIMD CPU.
Normally when you compute x = y + z, those 3 variables represent single values.
e.g. 2 + 2, results in 4.
With SIMD, those 3 variables represent arrays.
e.g. [2,7,1] + [2,1,1] results in [4,8,2].

Each instruction is simultaneously executed over a large number of values, so that you get more work done faster.

You want to avoid branching with this kind of architecture, because you end up wasting a lot of your SIMD abilities.

e.g. take the code
if( y > 5 )
  x = y;
else
  x = z;
If we execute that with our data of y=[2,7,1] and z=[2,1,1], this results in:
if( y > 5 ) [false, true, false]
  x = y; [N/A, 7, N/A]
else [true, false, true]
  x = z; [2, N/A, 1]
//finally x = [2, 7, 1]
The GPU has had to execute both the 'if' and the 'else', ignoring some parts of its arrays for each branch, and merging the results at the end. This is wasteful -- e.g. say the GPU has the capability to work on 3 items of data at once; in this example it's only working on 1 or 2 items at once.
The more nested branches you add, the more wasteful this becomes... so those kinds of programs are better off running on regular CPUs (or being redesigned to better suit this style of hardware).
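The branch-merging behaviour described above can be sketched in scalar C++: both sides are evaluated for every "lane", and a per-lane mask picks the result (names are illustrative):

```cpp
#include <array>
#include <cassert>

// Scalar model of how a SIMD/GPU machine executes the branch above:
// both sides run over every lane, and a per-lane mask selects the result.
std::array<int, 3> SelectBranch(const std::array<int, 3>& y,
                                const std::array<int, 3>& z)
{
    std::array<int, 3> x{};
    for (int i = 0; i < 3; ++i)
    {
        bool mask      = y[i] > 5; // per-lane condition: [false, true, false]
        int  ifResult  = y[i];     // 'if' side, computed for every lane
        int  elseResult = z[i];    // 'else' side, also computed for every lane
        x[i] = mask ? ifResult : elseResult; // merge via the mask
    }
    return x;
}
```

With y=[2,7,1] and z=[2,1,1] this produces x=[2,7,1], matching the walkthrough above; the cost is that both branch bodies were computed for all three lanes.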

In practice, the GPUs don't have any way to communicate with peripherals, just with the CPU
... the GPUs lack key features such as interrupts that are critical to implementing those programs in practice.

Out of interest's sake, GPUs can generate CPU interrupts and write to arbitrary addresses (which might be mapped to peripherals), but these abilities aren't exposed on PCs (outside of the driver).

#5003516 Are open pvp + full loot SANDBOX mmorpg's still possible?

Posted by Hodgman on 23 November 2012 - 09:18 AM

I think in summary so far we can say that it's not possible to have a popular game if it's full loot and open PvP.

At the time it was mentioned on the first page, DayZ (full loot, PvE, completely unrestricted PvP, permadeath) had 800K players in alpha. Now it's up to 1.3M players in alpha. At $30 to play, that's about $40M worth of popularity.

#5003494 Simulating CRT persistence?

Posted by Hodgman on 23 November 2012 - 07:55 AM

You only need one quad -- bind the "bottom" layer as the current render-target, then draw a quad textured with the "top" layer.
Rendering quads is indeed the standard way to do it - it's what the GPUs are designed to be good at. Most specialized 2D operations have been thrown out of the hardware these days.

Actually, it's often done with a single triangle that's large enough to just cover the screen, e.g. if the screen is the box:
| \
|  |\
but drawing quads is easier to think about ;)
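The single-triangle version can be written down as three clip-space vertices; a sketch (the `Contains` helper is just there to demonstrate coverage, not part of any rendering API):

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// The classic fullscreen-triangle trick: three clip-space positions that
// cover the whole [-1,1] screen box with a single triangle.
const Vec2 kFullscreenTri[3] = {
    { -1.0f, -1.0f },
    {  3.0f, -1.0f }, // overshoots the right edge...
    { -1.0f,  3.0f }, // ...and the top edge, so the box is fully covered
};

// Returns true if point p lies inside (or on) the counter-clockwise triangle,
// using edge-function sign tests.
bool Contains(const Vec2* tri, Vec2 p)
{
    auto edge = [](Vec2 a, Vec2 b, Vec2 c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    };
    return edge(tri[0], tri[1], p) >= 0 &&
           edge(tri[1], tri[2], p) >= 0 &&
           edge(tri[2], tri[0], p) >= 0;
}
```

The rasterizer clips the overshoot away, so you pay for the same pixels as a quad but avoid the diagonal seam between two triangles.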

#5003490 What's a reasonable timestep

Posted by Hodgman on 23 November 2012 - 07:47 AM

you can solve this by not running at a fixed timestep

Variable-length time-steps are an option -- pretty much all of the commercial games that I've worked on have actually used variable-length time-steps -- however, they do have some downsides.

* The delta-time value that you're using for this frame is actually the amount of time that it took to process the previous frame. You're using this as a guess for how long this frame will take, but this guess is wrong in the case where the frame-rate changes.
If your frame-rate isn't constant, then when it jumps up and down, your animation will become jittery. This is because your estimation was wrong, so you presented an image to the screen where the virtual distance moved doesn't actually match up with the physical time passed. You can alleviate this somewhat by smoothing your delta-time measurements.

* Numerical integration techniques (such as pos += velocity * delta) give different results depending on your time-step. This means that, for example, you may be able to jump a particular obstacle at 60FPS, but not at 30FPS, or vice versa! You can actually see this in some COD games, where you can jump to supposedly inaccessible places if you boost your frame-rate to 500FPS...
You can alleviate this by using better integration techniques, such as RK4 instead of Euler's method. However, if you're doing accurate physics, you pretty much have to choose a fixed time-step in order to get reliable results...

IIRC Bullet supports variable time-steps, but sternly warns against using them. However, it's possible to have some parts of your game run at a variable time-step, and other parts (such as your physics) run at some fixed time-step.
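That hybrid approach is commonly written as an accumulator loop: the renderer runs at a variable rate while physics always advances by the same amount. A minimal sketch (function and constant names are illustrative, not from the post):

```cpp
#include <cassert>

const double kFixedStep = 1.0 / 60.0; // physics steps at 60Hz regardless of FPS

// Consumes the variable frame delta in fixed-size physics steps.
// Returns how many fixed steps were simulated this frame; leftover time
// stays in the accumulator for the next frame.
int RunFrame(double frameDelta, double& accumulator)
{
    int steps = 0;
    accumulator += frameDelta;
    while (accumulator >= kFixedStep)
    {
        // StepPhysics(kFixedStep); // hypothetical: always the same step size
        accumulator -= kFixedStep;
        ++steps;
    }
    // The renderer can then interpolate between the last two physics states
    // with alpha = accumulator / kFixedStep to hide the step granularity.
    return steps;
}
```

A 33.3ms frame therefore runs two physics steps, while a 10ms frame runs none and just banks the time.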

#5003438 What's a reasonable timestep

Posted by Hodgman on 23 November 2012 - 03:53 AM

point out why you would tell someone to just increase his update rate without actually knowing what the underlying cause is.

Calm down.
No one here has told anyone to arbitrarily increase their simulation rates. Kunos is reacting to your hyperbole.
Surely saying that 48Hz is enough for everyone, without knowing what the underlying issue is, is just as offensive as retorting that statement with uncommon examples where 100Hz is justified? Aside from racing games, Street Fighter fans would notice if their input was delayed due to a 48Hz sim instead of a refresh-rate sim. We can't make absolute statements in either direction without information.
Now, help nicely, and hopefully ic0de can tell us more about his simulation soon so we can find out why 120Hz 'feels' better than 60Hz to him.

#5003413 What's a reasonable timestep

Posted by Hodgman on 23 November 2012 - 02:06 AM

FWIW, I've seen commercial racing games run critical parts of their sim at 1000Hz steps and other parts at 100Hz steps.

1/48th is usually the highest anyone would go

If your visual framerate is 60Hz and you don't want to have any input lag, then you'd also have to simulate at 60Hz.

#5003360 Help with String Hashing

Posted by Hodgman on 22 November 2012 - 08:29 PM

The memory usage isn't that bad - you're not likely to have megabytes of identifiers in your game.

Interestingly, many 'map' structures are 'hash tables' (the C++ one is a 'red-black tree', but anyway), so when you check if a string is present in your container, the container may be accelerating that lookup by hashing the string.
So if you implement interning, you might also be using hashing somewhat ;)

As for speed, the cost of generating the hash of a string is proportional to its length, whereas looking up a string in a sorted container scales logarithmically with the number of items already in the container, times the cost of each string comparison. This of course varies from container to container, e.g. lookup in a hash table will be comparable to the cost of the hash!
In any case, the interning/hashing is ok to be expensive, as long as it results in cheap integer comparisons afterwards.

One issue with interning is that it generates different IDs for the same string depending on the order in which you requested your IDs. This means it's hard to make use of these IDs in save games, for example.
Also, if your interned string collection is created at runtime, then you don't have the option of generating IDs at build time. E.g. My asset names are hashed when the files are created, and assets that link to other assets refer to them by these hashes only. At runtime, filenames don't exist.
If a script wants to load an asset by name, it can run the same hashing algorithm at runtime, and get the same ID that was generated at build time.

#5003217 Help with String Hashing

Posted by Hodgman on 22 November 2012 - 06:25 AM

Basically, these kinds of systems boil down to:
int foo = TurnStringIntoIdentifier("foo");
int bar = TurnStringIntoIdentifier("bar");
int other = TurnStringIntoIdentifier(string("f") + "oo");
bool isFoo = other == foo;
bool isBar = other == bar;
It doesn't have to be done at compile time, just as long as you're not constantly calling TurnStringIntoIdentifier. As long as you cache the result in an integer variable, then the code's gonna be faster than using string comparisons everywhere.

Aside from hashing, another way to make one of these systems is via string interning, which boils down to something like:
container<string> strings;
int TurnStringIntoIdentifier( string input )
  if strings does not contain input then
	add input to strings
  return index of input within strings
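The interning pseudocode above can be made concrete with a map for fast lookup plus a vector to preserve insertion order; a minimal sketch (the struct name is illustrative):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// A minimal string-interning table, matching the pseudocode above:
// the first time a string is seen it's appended and given the next index;
// every later request for an equal string returns that same index.
struct InternTable
{
    std::unordered_map<std::string, int> lookup;
    std::vector<std::string> strings;

    int TurnStringIntoIdentifier(const std::string& input)
    {
        auto it = lookup.find(input);
        if (it != lookup.end())
            return it->second;            // already interned: reuse its index
        int id = (int)strings.size();     // new string: append and record index
        strings.push_back(input);
        lookup.emplace(input, id);
        return id;
    }
};
```

Unlike hashing, this can never produce a false positive -- two different strings always get different IDs -- at the cost of keeping every string alive in the table.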

For a basic version using hashing, the TurnStringIntoIdentifier is just a hash function. I often use the FNV-1a hash, but there's plenty to choose from.
The problem with this basic version is that it's possible for two strings to return the same hash, which means when you compare the integers you can get false-positives. e.g. above, isFoo and isBar could both be true!

One way to deal with this is to still perform regular string comparisons, but only when the hash compare succeeds. If the hash comparison is false, you can skip the string comparison.

Another way is to simply assume that this event will never occur.
In my engine, in development builds, whenever I hash a string, I add it to a global map, with the hash as the key and the string as the value. If that key is already present in the map, then I assert that the existing value matches the string that I'm currently hashing. If this assertion fails, it means I've ended up in the unfortunate situation where two strings have generated the same hash. When this occurs, you can change the "salt" constant used in your hashing algorithm until the problem goes away.
In shipping builds, I skip this assertion checking, and assume that no strings will generate the same hash.
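For reference, 32-bit FNV-1a is only a few lines; a sketch using the standard published constants:

```cpp
#include <cassert>
#include <cstdint>

// FNV-1a over a NUL-terminated string, using the standard 32-bit
// offset basis and prime.
uint32_t Fnv1a(const char* str)
{
    uint32_t hash = 2166136261u;   // FNV offset basis
    for (; *str; ++str)
    {
        hash ^= (uint8_t)*str;     // xor in the next byte first...
        hash *= 16777619u;         // ...then multiply by the FNV prime
    }
    return hash;
}
```

To add the "salt" trick described above, you'd fold an extra constant into the starting value; and because the whole function is just xors and multiplies over character values, it's also easy to run at build time over asset names.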

#5003209 What would you want from a zombie apocalypse simulator.

Posted by Hodgman on 22 November 2012 - 06:00 AM

what would you like to see in a zombie simulator?

DayZ (hardcore, lawless, survival), but without the cheating epidemic that ruins DayZ, and with more tools to allow for attempted civilized interaction with other survivors.
Also, infection. You get bit and you've got to hide it from your crew, lest they shoot you before your time is up.

The zombie setting lives from the doomed, forlorn setting where only a handful of people survived, there's no place for massive

Massive player counts could be cool if the world was also equally massive.
DayZ avoids crowding by 'sharding' 100,000 players into a few thousand world-instances of 50 players each, so that on its 225km^2 map, you've got a good chance of being alone. This works -- meeting another player is a rare event, and personally, I usually hide or run because of the 'stranger danger'. Can't trust anyone if they know you've got beans!

If instead, the world was 10,000 times bigger and no 'sharding' was used, it could be pretty damn epic. It's common in the apocalypse setting for people to hear news of places hundreds of miles away (via rumour, radio, etc.), which results in a "quest" to reach the source of that news, one that takes a very long time, seeing that working motorized transport is now rare. This doesn't work in DayZ, because I can walk anywhere in the world in about an hour or two.

#5003199 Direct3D on ARM processor polygons overlapping

Posted by Hodgman on 22 November 2012 - 05:14 AM

I tried lowering the z distance to 500, but it was already only 5000 which is well below 16 bit.

Depth buffer values are stored logarithmically, which means approximately half of your precision is used to represent the values between the near plane and twice the near plane.
This is a simplification, but to illustrate -- with a 16-bit buffer imagine a pixel with depth of near == 0x0, a pixel with depth of far == 0xFFFF, and a pixel at 2 * near == 0x7FFF.
So, it's very important that you don't use a small value for the near plane. The far-plane isn't very important in comparison.
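The uneven precision can be checked with the standard D3D-style projection, which maps view-space z to a [0,1] depth value; a worked sketch (the function name is illustrative):

```cpp
#include <cassert>

// Post-projection depth for a standard D3D-style [0,1] depth range:
// d = (f / (f - n)) * (1 - n / z).  z == near maps to 0, z == far maps to 1.
double ProjectedDepth(double z, double nearPlane, double farPlane)
{
    return (farPlane / (farPlane - nearPlane)) * (1.0 - nearPlane / z);
}
```

With near = 1 and far = 10000, a point at z = 2 (twice the near plane) already lands at d ≈ 0.5: half the depth buffer's range is spent on [near, 2*near], which is why pushing the near plane out helps so much more than pulling the far plane in.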

#5003170 A breakdown on company expenses when making games

Posted by Hodgman on 22 November 2012 - 02:46 AM

In the traditional developer/publisher model:
The developer doesn't spend any money on advertising.
A publisher wants to make a game, they approach several developers. The developers "bid"/"pitch" for the job of making it, by preparing some demo material and a production schedule with costs. The publisher picks the cheapest one, and starts paying for it in instalments (milestones).
Meanwhile the publisher is also planning their marketing / advertising campaigns. They'll probably spend an equal amount of money on advertising as on development. Possibly more on advertising, because it's what actually makes them money.

#5003168 HLSL Pixel Shader that does palette swap

Posted by Hodgman on 22 November 2012 - 02:41 AM

If you're going to do this, it would be a whole lot simpler if the original sprite contained palette indices instead of colours.

You could pre-process your sprites once on the CPU to avoid having to execute that ridiculously huge loop in your pixel shader.

Then you'd just have to do "fetch sprite colour (which is an index), fetch colour from palette at that index", instead of "fetch sprite colour, search the entire source palette for that colour to determine its index, fetch colour from the new palette at that index".

#5003136 why is it recommended to make game with placeholders and do art last?

Posted by Hodgman on 22 November 2012 - 12:09 AM

Who recommends that?
In a professional studio, you've got a whole bunch of full-time artists and programmers. If you're only making one game at a time, and you do the art last, then you'd be paying your art team to sit around doing nothing...

For a 1 man team, you don't have to worry about the above, of course.

#5003100 object-oriented vs. data-oriented design?

Posted by Hodgman on 21 November 2012 - 08:15 PM

Now my last sentence sounds all OO right? ... and it is ... but I got to that design from a data first process, that ENDS in OO, but doesn't start there.

There is a distinction to be made between "Data Oriented" and "Data Driven"; I think your post describes the latter, which is basically a facet of good OOD.

A better description of DOD than the one I gave earlier might be "think about the data first, then the code later".
Under that description, Xai sounds like he's focussing on (or orienting his thoughts around) the data as a primary concern.

e.g. with an OOD renderer, when setting material values, you might think "ok, I'll need a pointer to an IMaterialDataSource interface... what methods will it have?".
With a DOD renderer, you might instead consider that if the signature of your function was void SetMaterialData(const byte* allTheDataYouNeed, int size), then what would that buffer of bytes contain, and how would it be laid out?

Often, if you design around the data first, and then slap an OO layer on top of the data, then it becomes very data-driven as a side effect.

e.g. I've got an OOP interface for reflecting on shader programs, such as looking up a technique by name:
struct ShaderPack : NonCopyable, NoCreate
	int FindTechniqueByName( const char* name ) const;
	int GetTechniqueBlah( int index ) const;
But long before I wrote that interface and what methods it could have, I first sat down and designed a bunch of classes with no methods and just data. There's also other classes that can consume this data besides the above interface.
template<class T> struct ArrayOffset  { s32 offset; /*operators to act like a T[]    */ };
                  struct StringOffset { s32 offset; /*operators to act like a char*  */ };

struct Technique
	u32 blah ... ;
struct ShaderPackBlob
	u32                         numTechniques;
	ArrayOffset<Technique>      techniques;
	ArrayOffset<StringOffset>   techniqueNames;