

wodinoneeye

Member Since 02 Dec 2004
Offline Last Active Jul 26 2014 05:41 PM

Posts I've Made

In Topic: How to manage 100 planets?

26 July 2014 - 05:42 PM

You could have a standard 'development' profile: a normal sequence of installations/builds appropriate for the uses you choose per planet.

 

Mining raw materials (mines, conversion facility, staff, transport) - similar for farming; specialize by resource per planet

 

Factory (factories, transport, workforce, service industry) - variety/mix of what to produce, including military; placed for proximity to a combination of raw materials; specify what general type of goods is produced

 

Population center (transport, housing, service industry, luxury, cultural advancements, research)

 

A planet could have a mix of all of the above (or be singly specialized, if you have 100 planets and a sufficient mix to start off with).

 

 

I actually got sick of 4X games that pretty much always had the same sequence of setup/advancement, and usually too big a map to manage with too much repetitive detail.

 

So this idea is to let the AI do the tedious stuff while YOU manage the decisions across your (up to) 100 planets to get/maintain a working mix of resources and to adapt as you grow.
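
As a rough sketch of what such a data-driven profile might look like (all names here -- PlanetProfile, BuildStep, ai_governor_tick -- are hypothetical, not from the post), the AI governor just walks the build sequence the profile defines:

```python
# A minimal sketch of data-driven planet "development profiles" that an AI
# governor executes, so the player only assigns a specialization per planet.
# Every name here (PlanetProfile, BuildStep, ai_governor_tick) is hypothetical.
from dataclasses import dataclass

@dataclass
class BuildStep:
    structure: str        # e.g. "mine", "conversion_facility", "factory"
    resource_cost: dict   # resources consumed to build it

@dataclass
class PlanetProfile:
    name: str                     # "mining", "factory", "population"
    build_sequence: list          # the standard order of installations/builds
    specialization: str = "none"  # e.g. which raw resource this planet focuses on

MINING_PROFILE = PlanetProfile(
    name="mining",
    build_sequence=[
        BuildStep("mine",                {"metal": 50}),
        BuildStep("conversion_facility", {"metal": 80}),
        BuildStep("transport_hub",       {"metal": 30}),
    ],
    specialization="iron",
)

def ai_governor_tick(planet_state, profile):
    """Return the next affordable build in the profile's sequence, or None.
    The AI does the tedious sequencing; the player only picks the profile."""
    for step in profile.build_sequence:
        if step.structure in planet_state["built"]:
            continue
        affordable = all(planet_state["stock"].get(r, 0) >= cost
                         for r, cost in step.resource_cost.items())
        return step.structure if affordable else None
    return None   # profile fully built out
```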


In Topic: Game state synchronization techniques

26 July 2014 - 05:29 PM

Why do you have to store 1 second / 60 frames' worth?

 

Why not the nearest 6th of a second (store 6 snapshots back)? You're going to lag-compensate anyway, and it's all transitory data you don't need perfect catch-up animations for.

 

With this much shorter set of snapshot data you might be able to make your per-client store static (ditch the container overhead and use pointer math on the server). Snapshot variable sizing just requires circular indexing maths...

 

Depending on how often this ACK-fail retransmit happens, couldn't you also have the per-client snapshot on the server be pointers/indexes into one full (common) set of snapshot data (stored per actual object) to minimize the server memory?

If the failures+retransmits are chronic, then all this overhead doesn't gain much over just forcing through the current data state.
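
A minimal sketch of the above, assuming a fixed 6-deep shared ring of snapshots plus a per-client last-ACKed index (all names -- SnapshotRing, build_delta -- are illustrative):

```python
# Sketch: one shared ring of the last 6 world snapshots, with each client
# holding only an index into it (its last-ACKed snapshot) instead of a
# full per-client copy.  All names here are hypothetical.
RING_DEPTH = 6   # ~1/10 s of history at 60 Hz; tune to taste

class SnapshotRing:
    def __init__(self):
        self.buf = [None] * RING_DEPTH   # fixed/static storage, no realloc
        self.head = -1                   # sequence number of newest snapshot

    def push(self, snapshot):
        self.head += 1
        self.buf[self.head % RING_DEPTH] = snapshot   # circular indexing

    def get(self, seq):
        """Return the snapshot with sequence `seq`, or None if it was
        already overwritten (too old) or has not been produced yet."""
        if seq > self.head or seq <= self.head - RING_DEPTH:
            return None
        return self.buf[seq % RING_DEPTH]

# Per-client server state is just an integer: the last snapshot it ACKed.
clients = {"client_42": {"last_acked": -1}}

def build_delta(ring, client):
    base = ring.get(client["last_acked"])
    latest = ring.get(ring.head)
    if base is None:
        return latest          # baseline lost: force through the full state
    return {k: v for k, v in latest.items() if base.get(k) != v}
```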

 

-

 

It also might be good to compress the object angles from float to Int16 to cut down your primary update data (65536 angle increments should be more than sufficient).
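
A quick sketch of that quantization (standard technique, names are mine): map a full turn onto the 65536 steps of an unsigned 16-bit value, about 0.0055 degrees per step:

```python
# Sketch: quantize an angle in radians to a 16-bit integer for the update
# packet (65536 steps over a full turn, ~0.0055 degrees per step).
import math
import struct

def pack_angle(radians):
    turns = (radians / (2 * math.pi)) % 1.0      # normalize to [0, 1)
    return int(turns * 65536) & 0xFFFF           # 0..65535

def unpack_angle(q):
    return (q / 65536.0) * 2 * math.pi

# e.g. write it into a packet as an unsigned short instead of a 4-byte float
payload = struct.pack("<H", pack_angle(1.2345))  # 2 bytes on the wire
```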

 


In Topic: MMOs and modern scaling techniques

19 July 2014 - 06:15 PM

 

Ping times are a source of complexity and gameplay challenges, but they are not a source of scalability problems.
 
Ping times for wired connections will not drop dramatically in the future, because they are bound by the speed of light -- current internet is already within a factor of 50% of the speed of light, so the maximum possible gains are quite well bounded.


I'm not sure if I'm misreading you, but I feel like what you said is very misleading. The real-world performance of network infrastructure is not even slightly approaching 50% of light speed. We typically max out at 20% in best-case scenarios.

The majority of transit time is eaten up by protocol encoding/decoding in hardware, and improving the hardware or the protocol can dramatically reduce transit latency. E.g., going from TCP to InfiniBand inside a cluster can reduce latency from 2 milliseconds to nanoseconds.

Not saying it's practical by any means, but we're bound by switches/protocols far more than light.

 

 

And by the number of 'hops' along the path the data takes (repeating the above overhead over and over).

 

Maybe in the future we will have a more 'Johnny Canal' (https://screen.yahoo.com/johnny-canal-000000884.html) type Internet system with fewer hops, but that costs lots of cash...
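
A rough back-of-envelope with my own illustrative numbers (not from the thread), splitting a ping into fiber propagation at roughly 2/3 c plus an assumed per-hop forwarding cost:

```python
# Rough back-of-envelope for where ping time goes (illustrative numbers only).
SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FRACTION      = 2 / 3          # light in fiber travels at ~2/3 c
DISTANCE_KM         = 4_000          # e.g. roughly US coast to coast
HOPS                = 15             # typical-ish router hop count
PER_HOP_MS          = 0.5            # assumed queuing/forwarding cost per hop

propagation_rtt_ms = 2 * DISTANCE_KM / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION) * 1000
hop_overhead_ms    = 2 * HOPS * PER_HOP_MS

print(f"propagation RTT ~{propagation_rtt_ms:.0f} ms, hop overhead ~{hop_overhead_ms:.0f} ms")
# ~40 ms of pure propagation plus ~15 ms of per-hop overhead: both the path
# length and the number of hops matter long before a speed-of-light ping.
```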


In Topic: MMOs and modern scaling techniques

19 July 2014 - 06:05 PM

 

Higher complexity can require scaling to be even greater (not linear) than in previous, simpler games.

 

(Good) NPC AI, for example, with farmed-out AI processing -- those separate NPC AI computers have to maintain their own local world representations, with all the volumes of world map updates flowing (hopefully across a high-speed server network). Now the greatly increased data traffic through the individual world map zone servers (the state bookkeeping process) starts overwhelming/burdening them (requiring yet more of them, plus the overhead of zone/area edge handling for large continuous worlds).

 

Communication-bound limitations come as a secondary effect on top of being data-processing bound (and AI uses orders of magnitude more CPU and significantly more local data per 'smart' object).

 

The new complexity can require another O(N^2) expansion, as there are more interactions across CPUs handling fewer and fewer objects each (with the traffic having to go across the much slower network interface instead of staying within the same shared memory space).
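
A toy estimate (my own illustrative numbers) of that effect: with N objects split evenly across M servers, the share of pairwise interactions that must cross the network approaches (M-1)/M:

```python
# Toy estimate (illustrative only): with N interacting objects split evenly
# across M servers, what fraction of the O(N^2) pairwise interactions has to
# cross the (much slower) network instead of staying in shared memory?
from math import comb

def cross_machine_fraction(n_objects, n_servers):
    per_server = n_objects // n_servers
    total_pairs = comb(n_objects, 2)
    local_pairs = n_servers * comb(per_server, 2)
    return 1 - local_pairs / total_pairs

for servers in (2, 4, 16, 64):
    print(servers, round(cross_machine_fraction(10_000, servers), 3))
# ~0.5, ~0.75, ~0.94, ~0.99 -- adding machines turns almost every interaction
# into a network message, which is where the non-linear scaling cost shows up.
```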


In Topic: Procedural Animation in a MMO?

19 July 2014 - 05:32 PM

Now, compressing motion data using equations versus bone time/angle tables (versus tweening keyframes) could be very useful when you have lots of objects and limited data bandwidth (all precalculated data), and could be quite important as world complexity increases.

 

I suppose, with data streaming through the server, you could add dynamic motion data creation by the server, or for each avatar by its own client, which is then sent upstream to be distributed to multiple other clients (multiple viewers of the same action).

 

OR

 

Semi-static generation (by the server) to make various reused action animations change over time with different variations (keep 3 or 4 different ones cached so objects in the same vicinity doing exactly the same high-level thing would do so with some variation). Pipe the (hopefully nicely compressed) equation/coefficient data to the clients.
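
A hedged sketch of the 'equations instead of angle tables' idea -- the particular curve form (offset plus two harmonics) and all names are my own illustration, not anything from the post:

```python
# Sketch: represent a looping joint-angle curve as a handful of sinusoid
# coefficients instead of per-frame keyframes, and let the server jitter the
# coefficients to produce a few cached variations of the same action.
import math
import random

def eval_joint_angle(coeffs, t, period=1.0):
    """coeffs = (offset, a1, phase1, a2, phase2): 5 floats per joint,
    versus 60 keyframed angles per second of animation."""
    offset, a1, p1, a2, p2 = coeffs
    w = 2 * math.pi * t / period
    return offset + a1 * math.sin(w + p1) + a2 * math.sin(2 * w + p2)

def make_variation(coeffs, amount=0.1):
    """Server-side: perturb the coefficients slightly to get one of the
    '3 or 4 cached variations' of the same high-level action."""
    return tuple(c * (1 + random.uniform(-amount, amount)) for c in coeffs)

walk_knee = (0.4, 0.8, 0.0, 0.2, 1.5)                 # ~20 bytes of floats
variants = [make_variation(walk_knee) for _ in range(4)]
sample = [eval_joint_angle(variants[0], t / 60) for t in range(60)]  # client side
```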

