Quote:Original post by gbLinux
let's consider the best possible case scenario for server based approach, where server is located right in the middle. say, all the connections are the same speed and hypothetically perfect...
Ah, see, here's your problem right here.
All connections are not equal. Backbone connections are fast, with low latency. The link between your computer and your ISP's gateway is slow, oversubscribed, perhaps throttled, and subject to QoS, bandwidth shaping, or buffering. People on wireless cards also see higher latency.
Quote:*** P2P approach
- it takes 30km packet traversal for each client to have information about all other
They also each need to calculate the physics themselves.
Quote:*** SERVER approach
- it takes 34km packet traversal for each client to have information about all other, plus the time server took to calculate physics, which can be what? 50, 100, 500ms?
That time is not part of latency, since communication is asynchronous.
As said, the server does not apply updates immediately; this allows for fair play and tolerates players with high latency. The server is effectively running 100ms in the past.
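To make "running 100ms in the past" concrete, here is a minimal sketch of the idea: the server buffers incoming commands and only applies those older than a fixed grace window, so a high-latency player's input still arrives in time to count. The class and method names are illustrative assumptions, not any real engine's API; times are integer milliseconds.

```python
# Toy sketch: an authoritative server that simulates behind real time.
class DelayedServer:
    SIM_DELAY_MS = 100  # the server simulates 100 ms behind real time

    def __init__(self):
        self.pending = []  # list of (client_timestamp_ms, command)

    def receive(self, timestamp_ms, command):
        self.pending.append((timestamp_ms, command))

    def step(self, server_time_ms):
        """Apply (here: just return) every command stamped at or before the cutoff."""
        cutoff = server_time_ms - self.SIM_DELAY_MS
        ready = sorted(c for c in self.pending if c[0] <= cutoff)
        self.pending = [c for c in self.pending if c[0] > cutoff]
        return [cmd for _, cmd in ready]

server = DelayedServer()
server.receive(50, "fire")   # low-latency client
server.receive(20, "move")   # high-latency client: arrives later, stamped earlier
print(server.step(160))      # both fall inside the 100 ms window: ['move', 'fire']
```

Note that the slow client's "move", although it physically arrived second, is applied first because the server orders by client timestamp inside the grace window - that ordering is exactly what levels the field.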
Quote:there will always be the limit on processing speed of the server, and no matter how fast calculation can be it would still take 'some' time,
Yes - and the same goes for clients. In the server-based model, the server will be a 16-processor, server-grade piece of hardware. In the P2P model, each peer may be a 5-year-old notebook running in power-saving mode (not always, but too often), with perhaps 3% of the server's processing power - yet it must do the same work as the server!
Quote:if for nothing else, but only to loop through all the clients and send packets to each... and then, the first client on the list has advantage over the last, the player with faster connection has advantage over the one with slow connection, or merely further away from the server.
Looping over the clients takes negligible time. And again, the server simulates things in the past, so a player with high latency is not disadvantaged.
All current server-based models are 'fair': they are designed to take latency into account (whether 1ms or 100ms) and level the playing field. The "different connection speed" argument hasn't been valid for years.
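The canonical "level the playing field" trick is rewind-style hit validation: the server keeps a short position history and checks a shot against the world as the shooter saw it, not as it is now. This is a hedged sketch of that idea only - `History` and `validate_hit` are made-up names, and times are integer milliseconds.

```python
from bisect import bisect_right

class History:
    """Short history of a target's position, one sample per tick."""
    def __init__(self):
        self.times = []      # ms timestamps, ascending
        self.positions = []  # target's x-position at each timestamp

    def record(self, t_ms, pos):
        self.times.append(t_ms)
        self.positions.append(pos)

    def position_at(self, t_ms):
        """Most recent recorded position at or before time t_ms."""
        i = bisect_right(self.times, t_ms) - 1
        return self.positions[max(i, 0)]

def validate_hit(history, shot_time_ms, client_latency_ms, aimed_at, tolerance=0.5):
    # Rewind to when the shooter actually pulled the trigger on their screen.
    rewound = history.position_at(shot_time_ms - client_latency_ms)
    return abs(rewound - aimed_at) <= tolerance

h = History()
for tick, x in enumerate([0.0, 1.0, 2.0, 3.0]):  # target moving right, 1 unit/tick
    h.record(tick * 100, x)

# A 100 ms-latency client aimed where the target WAS; the server rewinds and agrees.
print(validate_hit(h, shot_time_ms=300, client_latency_ms=100, aimed_at=2.0))  # True
```

Without the rewind, the same shot would be judged against the target's current position (3.0) and miss - which is exactly the unfairness the compensation removes.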
Quote:stuff would simply get updated as it arrives, again regardless of actual display frame-rate and independent of "slowest connection" syndrome,
"And then, magic happens..."
There is no "slowest connection" syndrome. The slowest client affecting all others is an artefact of network models built for LAN, with no latency compensation - and it has not been an issue for over a decade.
"stuff simply gets update" is an unsolved problem right now. For example, very knowledgable people are
exploring how to make stuff happen.
Without listing all various methodologies.
- P2P can work without physics at all, or assume a non-deterministic model, synchronizing as needed and perhaps incurring high latency and stalls
- Or it can use a fully deterministic model and accept that the slowest client will determine the update rate (the way Age of Empires worked)
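The second option - deterministic lockstep - can be sketched in a few lines: the simulation advances one turn only once commands from every peer for that turn have arrived, so the slowest connection sets the pace for everyone. The function and data shapes below are illustrative assumptions, not the actual Age of Empires code.

```python
# Minimal deterministic-lockstep sketch (the Age of Empires style model).
def lockstep(turn_commands, peers):
    """turn_commands: {turn: {peer: command}}; returns the turns actually simulated."""
    simulated = []
    turn = 0
    while turn in turn_commands:
        received = turn_commands[turn]
        if set(received) != set(peers):
            break  # a slow peer hasn't sent its command yet -> everyone stalls
        simulated.append((turn, [received[p] for p in peers]))
        turn += 1
    return simulated

peers = ["alice", "bob"]
commands = {
    0: {"alice": "move", "bob": "build"},
    1: {"alice": "attack"},               # bob's turn-1 command is still in flight
}
print(lockstep(commands, peers))  # only turn 0 runs; turn 1 waits on bob
```

Because every peer feeds identical command lists into an identical simulation, only inputs ever cross the wire - but one late packet freezes the whole session, which is the trade-off described above.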
Quote:the collective experience would not generally suffer only when the slow connection player is on the screen, which might pass as "barely noticable glitch" compared to visual absurds that happen to each and every client with server-based approach.
In the P2P model, everyone has to wait for the slowest peer.
In the server model, glitches are the price of guaranteeing fair play - and it is the slow player who lags, not everyone else.
Quote:why is it not used more or for the high frequency update games such as first person shooters?
It's a conspiracy, lead by IPAP (Internet posters against P2P).
Quote:why starcraft
Licensing and control.
Quote:why not quake?
Carmack is pragmatic and gets things done by choosing simplest and most robust solution.
Quote:but, but, why server in the first place? who did ever come up with server concept and why?
A bunch of people who developed both P2P and server-based models for 10 years, and finally concluded that server-based works better.
Look - there are a lot of real-world gotchas which have shown that P2P has its own set of problems, some of which are, as of right now, unsolvable - or the solutions are worse than server-based implementations.
The Half-Life and Quake networking models are described in detail in the FAQ, and there are other resources and articles covering the technical side. In the real world, other issues crop up as well: users behind NAT or corporate firewalls, shared WiFi, routers with broken firmware, and so on....
It's not a conspiracy of incompetent developers, but 20 or so years of experience by people who have actually had to ship such titles, and hordes of frustrated help desk workers who had to deal with customers using them.
Broadcast, as it applies to LAN, does not work over WAN. P2P, however, does, and is sometimes used - except that Real World(tm) issues make it inconvenient and not viable.