Large geographical area network

26 comments, last by evillive2 11 years, 5 months ago
@starbasecitadel
You understood the problem I'm trying to solve, and also the general solution I was thinking about (my last 2 replies). Thanks for making it clearer (as I said, I do not have much network knowledge).

I don't think speed of light should be an issue. The San Francisco to NY theoretical limit RTT is about 40ms (calculated from the distance and the speed at which data travels through fiber).
If we are talking about a game server center in a central location, and a client-server architecture, the data only needs to travel half that distance. And we are talking about covering the entire USA under one player pool (i.e. one game server center).
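The 40ms figure above can be sanity-checked with back-of-the-envelope arithmetic. This is only a rough sketch: the ~4,130 km distance and the fiber refractive index of ~1.468 are common approximations, and a real route is longer than the great circle.

```python
# Rough check of the theoretical fiber RTT quoted above.
# Assumed values: SF-NY great-circle distance ~4,130 km,
# refractive index of optical fiber ~1.468.

C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_INDEX = 1.468          # light in fiber travels at c / index
DISTANCE_KM = 4_130          # approx. San Francisco -> New York

fiber_speed = C_VACUUM_KM_S / FIBER_INDEX        # ~204,000 km/s
one_way_ms = DISTANCE_KM / fiber_speed * 1000    # ~20 ms one way
rtt_ms = 2 * one_way_ms                          # ~40 ms round trip

print(f"one-way: {one_way_ms:.1f} ms, RTT: {rtt_ms:.1f} ms")
```

A server in the middle of the country halves the player-to-server distance, which is the point made above.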

I don't think anyone needs 20ms-50ms RTT in a modern game (especially with all the modern in-game lag compensation mechanisms). People want the lowest possible, but below a certain number they only notice the difference because their RTT is shown to them.
From the reading I've done, I think if I could achieve <80ms RTT for the players farthest from the server center, it would be perfect.

The question now is whether this solution is practical: can it result in a network that covers the entire USA under one player pool, where everyone has <80ms RTT?
I know that a definite answer would have to consider many factors, but what I'm asking for is an assessment based on logic, knowledge and experience.

Also, the solution seems logical and not too complicated to implement, even to me, without much network knowledge (I couldn't articulate it like starbasecitadel, but we meant the same solution).
The question is: why hasn't this solution been implemented by big game companies?


You are assuming that the back-haul of the internet is suboptimal. That may be true for some of the discount tier-2/3 providers. If you look at top-quality providers, I think you'll find that, in general, it's a system that works pretty well, and doing better yourself is hard and costly, and not likely to improve the overall gaming experience a whole lot (we're talking small fractions of improvement here.)

[quote name='magerX' timestamp='1350776606' post='4992297']
What I'm referring to is an application delivery network (ADN), which optimizes dynamic (non-cacheable) applications and content.
[/quote]
Why don't you call them up and ask how well their technology would work with Counter-Strike?

All "acceleration" I've seen in this space builds on specific knowledge about, typically, transaction-based, RPC-based, often HTTP-based application interactions. That's not a good match for the needs of a typical action game, like Counter-Strike.

It sounds a little bit like the question could then be phrased as "How can I build an ADN for Counter-Strike (and other action games)?"

Note that the large providers (Xbox Live!, etc) already do this to some extent -- when they auto-match players, they attempt to hook players together to achieve "best game experience" which may include lowering latency, and matching skill. Xbox Live! doesn't have its own back-haul, though, because building that kind of infrastructure is very expensive. Also, Xbox Live! uses player-hosted game servers, so it can dynamically treat each little "clique" of players as a network, while the centralized matchmaking servers make everyone potentially visible to everyone else.
The nice thing with that approach is that you just need to match players up by latency, and don't need to worry about any of the low-level hardware and costs of running networking infrastructure.
enum Bool { True, False, FileNotFound };
@hplus0603
Look at starbasecitadel's post; he articulated better what I want to do (though the assumptions I wrote are still valid). The way you re-phrased the question I should have asked is probably appropriate as well.
More importantly, look at his solution. This is exactly what I meant when I wanted to leverage existing ADN servers/POPs to lower the latency under a single player pool concept.
So I guess existing ADNs are out, and a custom ADN is in (physical routers, reverse proxy servers or any other solution - I will probably still need to leverage an existing network like Ubiquity).

I read about the Live! architecture before, and as you also wrote, it uses player-hosted game servers and creates a "clique" of players as a network. Therefore, this solution is not good for me, because I want to create a network where everyone can play with everyone with low enough RTT (whether that means the USA as one network, or breaking it into a couple of networks).

If you can give your opinion and thoughts about starbasecitadel's solution, it would be very nice; I think it's the best approach (and the same thing I meant when I talked about using a commercial ADN).


Thanks again.
[quote name='magerX']I want to create a network where everyone can play with everyone with low enough RTT[/quote]

Let's back up a little bit.

Why do you think this can't be done today? What is the specific value proposition you want to provide that is significantly different from what exists?

Using Xbox Live! as an example:
Xbox Live! lets everyone match up to everyone else -- I can play MW3 with a party of friends all over the world if I want.
The experience is slightly more laggy if we're all dispersed, and end up on a server somewhere far from me, but I can still play those people.
If you use Xbox Live! style matchmaking, but make a rule that the measured ping has to be at most X milliseconds, then you are, in effect, creating a "geographically partitioned" virtual network, that provides the latency you want. The draw-back is that you have to exclude any friends who are located far away from you, measured by ping.
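The ping-cutoff rule described above can be sketched in a few lines. This is purely illustrative: the function name, the 80 ms cutoff, and the player data are example values, not anything from a real matchmaking system.

```python
# Sketch of ping-threshold matchmaking: admit a player into a
# region's pool only if their measured ping to that region's
# servers is at or below a cutoff. Cutoff and data are examples.

MAX_PING_MS = 80

def partition_by_ping(players, max_ping_ms=MAX_PING_MS):
    """Split players into eligible/excluded for one server region.

    `players` is a list of (name, measured_ping_ms) pairs, where
    the ping is measured against the candidate server region.
    """
    eligible = [p for p in players if p[1] <= max_ping_ms]
    excluded = [p for p in players if p[1] > max_ping_ms]
    return eligible, excluded

players = [("alice", 35), ("bob", 72), ("carol", 140)]
ok, out = partition_by_ping(players)
print(ok)   # alice and bob qualify; carol's ping excludes her
```

Raising `max_ping_ms` widens the effective geographic area of the pool; lowering it shrinks it, exactly the trade-off described above.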

So, using the existing internet, and putting servers in a few data centers, you can group players into groups with known maximum ping. If you raise the allowed ping, you can play with everyone in the world. If you lower the ping, you can play your regional area.

Now, what part of this do you want to improve?

Do you want to provide better (faster) internet service to the actual players? To do that, you need to pull fiber to each player's home. (Google is starting to do this as an experiment, btw.)

Do you want to reduce the cross-continent part of the latency? You could potentially do this by leasing your own network, but the speed improvement would be some fraction of what you'd get on the commercial internet; I doubt any consumer would see enough difference to want to pay for it.


If we take a particular game as an example, maybe it would help. I can play Counter-Strike today. I can browse servers in a large part of the world. I can choose servers based on how high/low the ping is.
What part of this solution do you want to change, and how, and why?
enum Bool { True, False, FileNotFound };
@hplus0603
First I will address the why(value proposition):
The network itself is the core on which I plan to build several services. These services do not exist today, and for obvious reasons I do not wish to share them on a public forum.
So, the network I want to build is a crucial means to providing the services that I want. For this discussion it does not matter what these services are; it's enough to know that the purpose of the network is to allow online multiplayer gaming in large geographical areas under a single player pool concept.

Live! network:
As I said, it is not suitable for my purposes because, despite the fact that everyone can play with everyone all around the world, IN PRACTICE, if all of the players in a specific game want to have low enough RTT, they will be playing with players who are close to them geographically.
So even though it allows a wide geographical network, it is not a practical solution for my problem, because people will be matchmade with people who are close to them.
I'm looking for a practical solution where, in a large geographical area, everyone can play with everyone and have low enough RTT.

What I want to do:
(Let's assume we are talking about Counter-Strike, because it's available and popular)
1. Define a network region. For our discussion we will say the entire USA is defined as one region. (I don't care about players outside the pre-defined network region.)
2. Build a SINGLE game server center in a central geographical location within this region (for our discussion: the USA).
3. Create a "low latency gamers network" that will reduce the latency to <80ms RTT between players all across the USA and my game server center. (With that I will create a "single player pool", i.e. everyone can play with everyone within the USA with an RTT of <80ms = there are no low/high ping servers, because everyone connects to the same server center.)

The problem is how to create this "low latency gamers network" to achieve <80ms RTT between players all across the USA and my SINGLE game server center.
starbasecitadel suggested finding an existing great network (like Ubiquity, for example) and doing one of two things:
1. Lease servers in each of their geographically distributed data centers. Then create reverse proxy servers on each of these nodes (using nginx, for example). These are not the game servers; they act like routers/peering points that forward TCP/UDP traffic to the actual game server center, through an extremely fast internal long-distance network (like Ubiquity's, for example).
2. Have the same setup as #1, but instead of using reverse proxy servers, lease actual rack space filled with smart routers.
(*Look at starbasecitadel's post for a more in-depth explanation)
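As a rough illustration of option #1, here is a minimal sketch of the forwarding logic such an edge node would run. This is a Python stand-in for something like nginx's stream proxy, not a production design: the single-socket, single-player simplification and all names are assumptions for the example.

```python
# Sketch of an edge-POP UDP relay: accept player datagrams, forward
# them to the central game server, and send server replies back.
# A real relay would keep a NAT-style session table so many players
# can share one POP; this sketch tracks only the most recent player.

import socket

def make_relay_socket(listen_addr):
    """Bind the UDP socket the relay listens on (e.g. at an edge POP)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(listen_addr)
    sock.settimeout(0.2)  # so the loop can notice the stop flag
    return sock

def run_udp_relay(sock, server_addr, stop_event):
    """Forward player datagrams to server_addr and replies back."""
    last_player = None
    while not stop_event.is_set():
        try:
            data, addr = sock.recvfrom(2048)
        except socket.timeout:
            continue
        if addr == server_addr:
            if last_player is not None:
                sock.sendto(data, last_player)   # server -> player
        else:
            last_player = addr
            sock.sendto(data, server_addr)       # player -> server
    sock.close()
```

The relay itself adds a little processing latency; the bet described above is that the faster private long-haul between the POP and the server center more than pays for it.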

I hope things are clearer now. If not, let me know and I will try to explain better.
If they are, I'm looking forward to hearing your opinion and thoughts :-)

[quote name='magerX']
First I will address the why (value proposition):
The network itself is the core on which I plan to build several services. These services do not exist today, and for obvious reasons I do not wish to share them on a public forum.
[/quote]


My experience, and that of the general start-up community, has been that ideas are a dime a dozen. Your problem is going to be finding the resources (people, money, connections, time) to deliver your plan. If some big company thought it was a good idea, they'd already be doing it, so the risk of "idea theft" is typically zero.

Note that this is true even for startups in Silicon Valley. For example: No venture capitalist will sign a non-disclosure agreement. And, truth be told, they probably already heard some other company pitch the same idea you have anyway. What they are looking for is the ability to execute, plus a potentially large, lucrative, unserved market. When the right team comes along with the right idea and market opportunity, that is when they invest.

[quote name='magerX']
So, the network I want to build is a crucial means to providing the services that I want.
[/quote]

Why? How wouldn't it work on the regular internet with virtual geographic grouping based on ping?

[quote name='magerX']
Live! network:
As I said, it is not suitable for my purposes because, despite the fact that everyone can play with everyone all around the world, IN PRACTICE, if all of the players in a specific game want to have low enough RTT, they will be playing with players who are close to them geographically.
[/quote]

Right. The infrastructure of the ISP and the realities of transmitting data from point A to point B make it so. The only way to improve this is to know who all your customers are, run special wiring/fiber straight to those customers, to a regional center, and then to wherever your server center is, and do it better than the AT&Ts and Comcasts and Verizons of the world. And, even so, you're only going to be better by some small factor depending largely on internal network overheads for the current ISPs. It's not like the current internet is run by people who don't know what they're doing -- they're already interested in the best possible performance of their network.

[quote name='magerX']So even though it allows a wide geographical network, it is not a practical solution for my problem, because people will be matchmade with people who are close to them.[/quote]

You can build a software system like Xbox Live! that matchmakes based on whatever parameters you want. What you can't do, is make routers faster than they are, or information travel faster than the speed of light.

[quote name='magerX']
I'm looking for a practical solution where, in a large geographical area, everyone can play with everyone and have low enough RTT.
[/quote]

Again -- what part of that RTT are you looking to improve on? Doing a traceroute from me to Google, about 2/3 of my time is spent winding its way through Comcast networks (actually going AROUND the bay about 1.2 times...) before they hand off to Google peering. This is pretty typical for a cost-driven end-user ISP. If you want to actually improve the gaming experience, the biggest cut is probably for the residential ISP, and you're going to have to wire each of your customers with a superior technology. Fiber is cheap, but digging holes in streets is pretty expensive.

[quote name='magerX']
3. Create a "low latency gamers network" that will reduce the latency to <80ms RTT between players all across the USA and my game server center. (With that I will create a "single player pool", i.e. everyone can play with everyone within the USA with an RTT of <80ms = there are no low/high ping servers, because everyone connects to the same server center.)
[/quote]

If users access your data center through residential ISPs, I don't think that's going to happen. Some ISPs are great, some days. Other ISPs are terrible, most days.

Ubiquity is just a hosted data center provider. And they even boast of using a brand of hardware which is not at the top of my list if I had to build a high-performance, trouble-free data center. They list their actual ISP connections (Level 3, GTT, etc.). They will not get you any closer to the ISPs than any other hosting location. Typically, you'll want to co-locate in a facility with high connectivity, and own your own hardware. You want to get as close as possible to the main places where ISPs interconnect (PAIX, NYIIX, SIX, etc.)

Sure, you can obtain some IP address prefix, multi-home it across many data centers, and BGP announce them in a bunch of different locations. This will make user packets enter your control sooner. Then what? If you build long-haul connectivity, you will lease capacity from the various backbone providers. They will route this on the same network as they route general internet traffic. They may establish virtual circuits for you, so it looks like a direct connection to you, but underneath, it's the same fiber, and the same routers. Which, by the way, is the same fiber and routers used for long-distance telephony in many cases.

Why would your hop, and your entry into the long-haul backbones, be any better than that of the customer's residential ISP? Chances are, it would just look like a little detour -- packets come from the ISP, go to you, turn around, go to back-haul, and then enter your network again at the destination. If you enter Level 3 networks or Sprint networks or any other back-haul provider, then chances are pretty good that you'll do so on the same terms as Comcast, Verizon, or any other end-user ISP.

It might be good if you could provide some data on why existing latencies are not good enough, and why 80 ms is the magic number to beat. It would also be interesting to see data about how many users already have this latency to, say, a data center in LA, or Texas, or Virginia, or London, or whatever, and then compare to how many more you think you can get by making whatever improvements you're trying to make.

My guess is that, if you break down the typical latency seen by the typical gamer, you're unlikely to be able to significantly improve their experience unless you can somehow make a newer, better, connection to their home, and then also do a better job of hauling those packets to your data center than the companies that have been doing it for 20 years. If you could show data that shows that there is a significant market for this, and you can do it at a price that works for that market, that would be an awesome, big, world-changing project!
enum Bool { True, False, FileNotFound };
@hplus0603
My key service is not revolutionary in the sense that nobody has thought about something like it before, and it's not about developing a new technology "star".
My idea is about "compiling" a number of existing services in a different way, and adding a "key service" which is not being provided yet. Providing this key service, and "compiling" the other services the way I think they should be, will make the sum much more than its parts, and that is what will be revolutionary.
Due to the way the industry has developed in recent years, at this point such services (and the way they will be "compiled") will answer the needs of a lot of customers AND provide added value for game publishers.
The "key service" is on the edge of what game publishing companies are responsible for (or would want to deal with), which causes something of a blind spot that has resulted in this service not being provided yet.
I know NDAs are not common these days, but you can agree with me that there is a difference between pitching to a VC and exposing my ideas on a public forum. This is why I will not go into details about it here. Also, it is not relevant to this forum or to the problem I'm trying to solve in this thread.

The reason I need to group players by region (like battle.net) and not by ping (like Live!) is the nature of the service(s) I will provide. The Live! scheme won't fit my purposes.
Essentially, what I'm trying to do (in the network part, which is what we're discussing in this thread) is to create a virtual geographic grouping for the largest possible pre-defined geographical area, while providing low enough RTT for all the players in this pre-defined area under a single player pool concept. Hopefully the entire USA, but if that's not possible, then splitting it into smaller networks (for example: 1. USA East 2. USA West) will be good enough.
80ms is the number to beat because, from my reading, 80ms-100ms is the threshold for a good game experience in FPS games (other game types can tolerate higher RTT, so it will obviously be good enough for them as well). It might change later on, but I had to come up with a number for the sake of the discussion, and in general this seems like a good number to beat.

I'm not sure if I've failed to explain my proposed solution, or if I'm missing the point you are trying to make. I will give it another try:
(*The solution I'm looking for is not about putting new cables in the ground or anything like that. What I'm looking for is to utilize existing infrastructure, networks, hardware, software, etc. in a different way.)
(**Please don't take my examples of companies or even technologies as "the way to go". I am trying to explain what I mean without much background in the field, so I go by what I read and understand. Try to get to the root of what I mean, not the specifics.)
- My general assumption is that internet communication (we are talking inside the USA in our discussion) is not efficient, and goes through a lot of hops from one point to another. This is why we are not near optimal RTT in most cases from point A to point B.
- My second assumption is that there are private networks which have their network centers in a number of strategic locations all over the USA (near key exchange points where major backbone providers peer with each other, for example) and have peering agreements with major carriers. This allows them to have near optimal RTT between their centers, and possibly a better connection from their centers to end-users (though I'm not sure about that part).

My idea is to utilize such a network in order to lower the RTT between players all across the USA and my ONE central game server center. I'll do it by leasing servers (and using reverse proxies as explained before; again, this is an example), or by actually leasing rack space (and putting in my own hardware), in all of this private network's centers. Additionally, in a central location in the private network (Chicago, for example), I will place my game server center.
The games only run on this one game server center. The other servers/racks with my hardware are there to receive and forward TCP/UDP traffic between the players and the actual game server center. These distributed servers/routers (peering points?) are there to leverage the private network's near optimal RTT between its centers, for the purpose of faster communication for players who are far from my server center.

Here is a picture of an existing network to illustrate my point:
http://oi49.tinypic.com/35kojs8.jpg
(*I edited the Chicago center to be a blue dot)
All of the dots are POPs/centers in an existing private network: the blue dot is where I would put my game server center; the red dots are the other routers/peering points which receive and forward TCP/UDP traffic between the players and my game server center.
Since this internal private network has near optimal RTT between the centers, compare two situations:
1. A player from LA connects directly to my Chicago game server center.
2. A player from LA connects to my LA router/peering point, and the packets then travel at near optimal RTT from my LA router/peering point to my game server center.
Obviously the overall RTT will be lower in the second case. And with this, I have created a network which can provide low enough RTT for larger-than-normal geographical areas under a single player pool concept.
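The comparison between the two LA cases above comes down to simple arithmetic. All numbers here are assumptions made up for illustration, not measurements; the whole argument hinges on whether the assumed public-internet figure is really worse than the local hop plus the private backbone.

```python
# Illustrative RTT arithmetic for the LA -> Chicago example.
# Every number below is an assumed value, not a measurement.

# Case 1: player connects to Chicago directly over the public internet.
public_la_to_chicago_rtt = 70.0    # assumed meandering public path, ms

# Case 2: player connects to a nearby LA POP, which relays traffic
# over the private backbone's near-optimal LA -> Chicago path.
la_player_to_pop_rtt = 10.0        # assumed short local hop, ms
backbone_la_to_chicago_rtt = 45.0  # assumed near-fiber-limit path, ms

direct_rtt = public_la_to_chicago_rtt
relayed_rtt = la_player_to_pop_rtt + backbone_la_to_chicago_rtt

print(f"direct: {direct_rtt} ms, relayed: {relayed_rtt} ms")
```

The relayed path only wins if the public long-haul is actually slower than the local hop plus the private backbone; if the public backbone is already near-optimal, the relay just adds a detour.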


Does it make sense now? Is it possible, realistic and smart? Am I missing a crucial part which makes it not a good idea (highly possible due to my lack of technical knowledge; this idea is mainly logic-based, after a few weeks of reading)?
Alternatively, do you have a better idea for creating the kind of network that I want? Up until now we have talked about why I need it this way, alternatives to what I need, and finding workarounds. If the network I described gives us what I need, do you have a creative idea on how to create such a network?
[quote name='magerX']
- My general assumption is that internet communication (we are talking inside the USA in our discussion) is not efficient, and goes through a lot of hops from one point to another. This is why we are not near optimal RTT in most cases from point A to point B.
- My second assumption is that there are private networks which have their network centers in a number of strategic locations all over the USA (near key exchange points where major backbone providers peer with each other, for example) and have peering agreements with major carriers. This allows them to have near optimal RTT between their centers, and possibly a better connection from their centers to end-users (though I'm not sure about that part).
[/quote]

Right. What I'm saying is that I don't think those assumptions are valid. The long-haul providers are reasonably efficient -- if they're not, then their competitors will be, and they'll die in the market!

It's been my experience that the main inefficiencies and sources of variability in the internet come from the consumer ISPs. Thus, to fix the main cause of this variability, you need to compete with the consumer ISPs, providing a better connection, which means shovels in the ground.

[quote name='magerX']my LA router/peering point, and the packets then travel at near optimal RTT from my LA router/peering point to my game servers[/quote]

You are assuming that the long-haul between LA and Chicago that you can provide is better than the long-haul that the customer gets from the general Internet backbone. My instinct is that that won't be true -- you basically will just become another tier-3 long-haul provider, leasing capacity on the same fiber that the packets would travel on anyway. Unless you lay your own fiber -- which means shovels in the ground, and is unlikely to be all that much better than what's already in the ground.

[quote name='magerX']Alternatively, do you have a better idea for creating the kind of network that I want?[/quote]

How about you create all the service integration based on IP geolocation, without the private network, and see how well it works? If it's a system that needs "network effects" to succeed it may be hard to roll out on a trial basis, but if you can find a way to get a smaller roll-out and measure network actuals, you'll have a much better picture of how (and if so, where) to optimize the transmission.
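The IP-geolocation grouping suggested above could look something like this. The prefix table is a made-up stand-in for a real geolocation database (e.g. a MaxMind-style lookup); all prefixes, region names, and players are invented for the example.

```python
# Sketch of region grouping by IP geolocation, with no private
# network involved. The prefix-to-region table is a hypothetical
# stand-in for a real geolocation database.

PREFIX_REGIONS = {
    "66.249.": "us-west",   # made-up prefix assignments
    "72.21.":  "us-east",
    "93.184.": "eu-west",
}

def region_for_ip(ip, default="unknown"):
    """Return the region for an IP by longest matching known prefix."""
    for prefix, region in sorted(PREFIX_REGIONS.items(),
                                 key=lambda kv: -len(kv[0])):
        if ip.startswith(prefix):
            return region
    return default

def group_players(players):
    """Map a list of (name, ip) pairs to {region: [names]}."""
    groups = {}
    for name, ip in players:
        groups.setdefault(region_for_ip(ip), []).append(name)
    return groups

players = [("alice", "66.249.1.5"), ("bob", "72.21.9.1"),
           ("carol", "66.249.77.3")]
print(group_players(players))
# -> {'us-west': ['alice', 'carol'], 'us-east': ['bob']}
```

Running a trial on top of plain geolocation like this would produce the real latency measurements needed to decide whether any transport optimization is worth building.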

enum Bool { True, False, FileNotFound };
http://aws.amazon.com/directconnect/

Amazon's AWS Direct Connect product appears to do something like what I was suggesting. That implies Amazon at least thinks that the long-haul internet connectivity issues on the public internet are significant enough for them to offer a major business product:

[quote name='amazon.com']
in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
[/quote]

[quote name='amazon.com']
in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

[/quote]

The costs you reduce are the Amazon network bandwidth charges for accessing the Amazon cloud -- instead, you pay for the same provisioning.

The latency they talk about is the access from customer premises to Amazon data centers. If you buy dedicated connections to their data centers from customer premises, it will side-step the "local ISP" problem. Which is reasonable for a company that has a facility close to a local fiber loop, but kind-of dies when you try to do it for each end-user customer in an area.

enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
