
Large geographical area network



#1 magerX   Members   -  Reputation: 117


Posted 19 October 2012 - 04:46 PM

I would like to start a theoretical debate about the possibility of creating a large geographical area network for online multiplayer gaming. I'm talking about a client-server architecture under a "single player pool" concept, meaning everybody on the network can play with everyone else with a "low enough" RTT.
(*By theoretical, I mean that the practical implementation of such a network (coding, etc.) is not the issue of the discussion. The issue is whether it is possible, and what the best approach would be.)

Assumptions:
1. Budget is not an issue. Even so, the solution needs to be realistic.
2. We will take Counter-Strike as an example (an "AAA" FPS that allows private dedicated game servers).
3. "Large geographical area" optimally means the entire USA. If that is too large, USA East and USA West, for example, would be good enough. (The USA is just an example; we could talk about Europe in the same way.)
4. "Single player pool" means that each pre-defined geographical area has only ONE game server (or more accurately, one game-server center) in a central location within that area (either the entire USA as one area, or two areas with ONE game-server center each: 1. USA East, 2. USA West). Players within the pre-defined area(s) connect only to the game-server center of their specific area.
5. Players outside a pre-defined geographical area cannot connect to that area's game-server center.
6. All of the players within the pre-defined area should have a "low enough" RTT, which probably means under 100 ms overall (<80 ms would be perfect).

Is it possible to create something like that? What would be the best approach?
Please share any thoughts you have on the subject...


#2 KnolanCross   Members   -  Reputation: 1317


Posted 19 October 2012 - 05:15 PM

Being large doesn't really seem to be the problem. A huge empty world would be pretty easy to create.

Mostly, the problems start when you fill a region with a lot of players (the number of messages needed to keep all the clients updated grows as n^2), or when there is a lot of math to do to keep the simulation going (such as very detailed AI or physics).
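
To make that growth concrete, here is a quick sketch (Python, purely illustrative) of how the per-tick message count explodes under naive broadcasting:

# With naive broadcasting, every player's state update must reach
# every other player, so traffic grows as n*(n-1), i.e. O(n^2).
def messages_per_tick(n_players: int) -> int:
    return n_players * (n_players - 1)

for n in (10, 100, 1000):
    print(n, messages_per_tick(n))
# 10 -> 90, 100 -> 9,900, 1000 -> 999,000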

I believe there isn't much of a magic solution to this: you can host the game in several zones, each processed by a different server (or a different CPU). In that case the transition areas are usually made deliberately unappealing to players, because we don't want them wandering there, or are not graphical at all (for instance, you enter a portal).

The other possible solution is to let the players process part of the needed work, or let them have a local version of the world while there is no one else around. Here is a post that says a few things about this approach:

http://www.gamasutra.com/view/feature/173977/scaling_a_physics_engine_to_.php

Currently working on a scene editor for ORX (http://orx-project.org), using kivy (http://kivy.org).


#3 magerX   Members   -  Reputation: 117


Posted 19 October 2012 - 06:11 PM

@KnolanCross
An assumption I probably forgot to add (as you addressed in your first sentence) is that we are also talking about large scale traffic-wise. Let's assume a few thousand to a few tens of thousands of players at any given time.

Also, I think I didn't explain myself properly:
The games I'm talking about are major FPS/RTS/MOBA games (by Valve, Activision Blizzard, etc.), not MMOs, and not new games I would create (i.e. I'm talking about existing games which use a client-server architecture).
So what I'm asking about is not an in-game mechanism to create the largest possible network, but rather creating the largest possible network (geographically) for existing games under a client-server architecture and the "single player pool" concept.
I took Counter-Strike as an example of such a game because Valve allows you to create a private dedicated server.

I hope that clarifies what I meant.

#4 hplus0603   Moderators   -  Reputation: 5354


Posted 19 October 2012 - 11:10 PM

At my previous company (Forterra Systems) we sold a system that did a single-instance, whole-Earth-sized world with a practically unlimited number of players across the world. It could compensate for 800 ms of latency going each way (although of course the game experience is worse when you have that kind of latency).

The main limitation was player density. You had to provision density beforehand. The system would do about 100 players per "cell", and the minimum size of a "cell" was 50x50 meters. Note that moving between cells was fully seamless, and you could see/interact with/shoot a player in a neighboring cell, so with high density the server had to know about a lot more players than the 100 within its own cell.
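
That cell bookkeeping is roughly the following (a minimal Python sketch of the general grid interest-management idea, not Forterra's actual code):

from collections import defaultdict

CELL_SIZE = 50.0  # meters; matches the 50x50 m minimum cell above

def cell_of(x: float, y: float) -> tuple:
    # Map a world position onto the server grid.
    return (int(x // CELL_SIZE), int(y // CELL_SIZE))

def neighborhood(cell: tuple):
    # A cell's server must also track the 8 surrounding cells, so that
    # players can see and shoot each other across cell boundaries.
    cx, cy = cell
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            yield (cx + dx, cy + dy)

# Index players by cell, then gather everyone relevant to one cell.
players = {"alice": (12.0, 960.0), "bob": (49.0, 1001.0)}
index = defaultdict(set)
for name, (x, y) in players.items():
    index[cell_of(x, y)].add(name)

visible = set().union(*(index[c] for c in neighborhood(cell_of(12.0, 960.0))))
print(visible)  # {'alice', 'bob'} -- bob sits in the adjacent cell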

You may still be able to license this; the company was bought by SAIC, a defense contractor, and the technology is licensed out as their OLIVE virtual world technology. The current sales brochure seems to focus more on the fairly innovative full-system record-and-playback technology: an operator could stop the game, rewind to some point in time, play it forward with voice-over for everyone to hear and camera control, and then optionally restart the simulation from some new point. Pretty slick, and quite useful for training, planning, and similar situations.

If I were to build a new version of this, I'm not sure I'd go through the pain to actually support full-Earth size worlds. That's way too much terrain for any one player to ever visit and play in. However, I'd build using an approach that can "stack" multiple servers on top of the same area, to allow close to infinite player densities, because it turns out, players all want to be where all the other players are...

enum Bool { True, False, FileNotFound };

#5 magerX   Members   -  Reputation: 117


Posted 20 October 2012 - 07:52 AM

@hplus0603
I'm looking for ways to actually REDUCE the latency in the network, not for mechanisms that compensate for existing high latency.
Therefore, I'm not talking about in-game mechanisms (or matchmaking), but about actual network-related techniques and components.
This is because I want to use this network for existing games, not new ones (for example, Counter-Strike). As I said, I'm also not talking about MMOs.

So basically I'm talking about network infrastructure that will minimize latency across a large geographical area (as explained in assumption #3), under the single player pool concept (as explained in assumptions #4, #5, #6).
With this infrastructure, you could use any existing game, lower the latency between the clients and the server center, and allow online multiplayer gaming across large geographical areas under the single player pool concept.
Obviously some customization will be needed for each specific game, but hopefully it will be minimal, because what I'm talking about here is infrastructure for low-latency communication between clients and the game-server center, not any in-game mechanism.

The simplest example I can think of to illustrate what I'm talking about is this:
a. Create a big dedicated game-server center in a geographically central location within the USA. The servers in this center run Counter-Strike games, for example.
b. What I'm asking about is creating a network infrastructure that will allow players from all over the USA to connect to this server center with an RTT lower than 100 ms. This way, everyone can play with everyone in a fair and enjoyable way.

I hope it makes sense...

#6 hplus0603   Moderators   -  Reputation: 5354


Posted 20 October 2012 - 12:20 PM

I see. Application-specific networks are built on a regular basis. For example, Amazon, Google, and other large companies typically lease backbone fiber capacity to directly connect their data centers.
The problem is end-user access. Your application-specific network will still need to get to the players. That means going through "the last mile," where the player ISP hooks up to some network peering exchange where it can talk to your network.
If you buy capacity to all of the major ISPs in all of the major metropolitan areas, you could cover perhaps 80% of all US residential customers with a "direct to ISP" topology. At pretty significant expense.

The question is: how much better would this be than just buying ISP bandwidth from 3-5 major carriers into your data center, and letting the carriers deal with the back-haul? Maybe you can shave a few milliseconds off the routing, maybe even a dozen. Whether that is worth it depends on market conditions. Long-haul "speed of light" will still matter, as will the dozen hops the customer's ISP sends traffic through before it gets onto your network.

enum Bool { True, False, FileNotFound };

#7 magerX   Members   -  Reputation: 117


Posted 20 October 2012 - 02:51 PM

@hplus0603
I basically had a similar idea in mind.
My idea was to create a big dedicated server center in a geographically central location within the pre-defined area (as explained), and then utilize an existing application delivery network (or another kind of private network) from companies like Akamai or CDNetworks.
From my reading, I couldn't find any use of such networks for online multiplayer games, but since ADNs optimize RTT for other bi-directional applications, I thought it might be possible.

Such companies' networks do what you were talking about, right (lease backbone fiber capacity, buy ISP bandwidth, etc.)?

If so, the question now is whether it would be possible to build a network large enough to fulfill all the assumptions/requirements I talked about in the first post.
I know it depends on a lot of factors, but from your knowledge and experience, do you think it should be possible to create such a network (large area, single player pool, <100 ms RTT, etc.)?

Edited by magerX, 20 October 2012 - 02:52 PM.


#8 hplus0603   Moderators   -  Reputation: 5354


Posted 20 October 2012 - 03:22 PM

utilize an existing application delivery network (or another kind of private network) from companies like Akamai


That's not a good model. Akamai optimizes bulk throughput by putting content caches near the consumer ISPs. There are two differences with games:

1) Games are all about low latency for frequently changing data; Akamai is all about high throughput for static data.

2) There is no interaction between different Akamai users, so each cache is "stateless."

If you put a set of game servers wherever Akamai has its cache servers, and then matched players to the nearest game server, you would get low latencies but pretty small pools of players (one pool for NYC, one for Boston, one for Atlanta, ...). In fact, you might even end up with different pools for players on different ISPs in the same region, because that's how you deal with the ISP interconnection latency issue.

enum Bool { True, False, FileNotFound };

#9 magerX   Members   -  Reputation: 117


Posted 20 October 2012 - 05:43 PM

@hplus0603
We are talking about two different network types/solutions.

What you are describing is a classic content delivery network (CDN), which optimizes static (one-direction) content delivery like video (streaming, VOD, etc.).
With that method I would end up with small player pools like you mentioned, which is not what I want.

What I'm referring to is an application delivery network (ADN), which optimizes dynamic (non-cacheable) applications and content. Here are papers from top companies which explain this type of network/solution:
http://www.dcs.bbk.a...CC_Brochure.pdf
http://www.edgecast....n-datasheet.pdf
http://www.ciosummits.com/media/pdf/solution_spotlight/CDNetworks_Making_applications_fly.pdf

As I said earlier, I haven't seen such networks used for gaming purposes. Still, since ADNs optimize RTT for other dynamic applications, I thought it might be possible (perhaps with some adjustments).
I know these networks reduce RTT largely through TCP optimization, while most of the games I'm talking about use UDP. My idea is to leverage the other parts of these networks: the POPs, the server distribution, the communication between them, etc.

Given that this is what I meant when I talked about using companies like Akamai/CDNetworks/Edgecast, please re-read my last message and give your opinion.
Also, any other thoughts you have on the matter would be nice.

Thanks for taking the time to respond :-)

Edited by magerX, 20 October 2012 - 05:59 PM.


#10 starbasecitadel   Members   -  Reputation: 699


Posted 20 October 2012 - 05:48 PM

It sounds like the problem the OP is trying to solve is essentially to reduce network latency for the vast majority of players, so that servers located anywhere on this "low-latency gamers network" are accessible by all players with <100 ms RTT. Most servers would ideally be hosted somewhere central, such as Chicago, to reduce the maximum distance travelled.

I could be wrong, but I had a similar thought a few years ago too. First of all, you need a great network. This is one I previously researched a little: https://www.ubiquity...om/data-centers .

Then the idea is this: you lease servers in some or all of their geographically distributed data centers. Then you create thin reverse-proxy servers on each of these nodes (perhaps using nginx, for example). These are not the game servers; they are really almost like routers (peering points?) that forward TCP/UDP traffic to the actual game servers -- through, say, Ubiquity's extremely fast internal long-distance network.
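
To make the forwarding-node idea concrete, here is a minimal Python sketch of what one such node does (illustration only; in practice you would use nginx or dedicated hardware, and the addresses here are hypothetical):

import socket
import selectors

LISTEN = ("0.0.0.0", 27015)            # this edge node (hypothetical)
GAME_SERVER = ("203.0.113.10", 27015)  # central game-server center (hypothetical)

sel = selectors.DefaultSelector()
edge = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
edge.bind(LISTEN)
sel.register(edge, selectors.EVENT_READ, None)

upstreams = {}  # client address -> dedicated upstream socket

while True:
    for key, _ in sel.select():
        if key.data is None:
            # Packet from a player: forward it over the fast internal network.
            data, client = edge.recvfrom(2048)
            up = upstreams.get(client)
            if up is None:
                # First packet from this player: open a new relay leg so the
                # game server's replies can be routed back to the right player.
                up = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                up.connect(GAME_SERVER)
                sel.register(up, selectors.EVENT_READ, client)
                upstreams[client] = up
            up.send(data)
        else:
            # Reply from the game server: relay it back to the player.
            data = key.fileobj.recv(2048)
            edge.sendto(data, key.data)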


What does that cost you? You are paying for hosting and intra-company long-distance network traffic. You will pay roughly 1 ms of extra latency for the nginx reverse-proxy peering servers, plus you have introduced another point of failure (though nginx is extremely stable in my experience and can easily be made redundant behind a load balancer).

What does it gain you? Instead of taking, say, 10-30 hops, you knock that number down and travel on an extremely fast, optimized network for most of the journey. You still don't eliminate the "last mile" from the closest nginx endpoint to the gamer's house, but that part doesn't get worse either. Overall, network latency should be reduced enough in this setup to drastically increase the number of low-ping game servers for most games (again, where the requirement is, say, 100 ms RTT or less). In terms of financial cost, I don't have an estimate for this configuration.

Where this breaks down is games with even lower latency requirements -- I'm not sure where that point is, but probably around 20-50 ms RTT. For those kinds of games you are limited by the speed of light more than anything, so physical proximity is still an issue.


edit:
A few corrections. While this would eliminate many hops, I was incorrect to state that the "last mile" is the only suboptimal network path. That is only true for players who live a mile away from one of the data centers. For everyone else, you still have to travel on the standard, suboptimal networks for the portion that connects you to the lowest-latency data center. In this example, if you are in Miami, you still have to travel suboptimal networks all the way to Atlanta. Only then are you on the optimized high-speed network that takes your traffic quickly to and from the Chicago game servers.

How do you improve that "last 100 miles" portion? I'm not really sure; at first glance that would be an enormous task. With enough funding, I guess you could lease server space at 1,000 mini data centers around the country and purchase leased lines from those mini data centers to the primary data centers (e.g. Atlanta). You would position these mini data centers at or very close to the local hubs of the major cable and DSL companies. How much this would cost, and how much latency it would save, I'm not sure. It probably wouldn't be worth the cost in most areas, but I could be wrong.

My next comment is that while I suggested nginx reverse-proxy Linux servers (something I've set up myself a few times, and which is pretty simple), this is really getting above my networking pay grade. It may be possible to do the same thing at a cheaper price by leasing full rack spaces filled with smart routers instead of servers acting as routers.

It seems inevitable that this kind of concept will eventually take off in some form, perhaps via a Google initiative. In the meantime there are services like http://www.smoothping.com/ that are doing some subset of what I described.

#11 magerX   Members   -  Reputation: 117


Posted 20 October 2012 - 06:48 PM

@starbasecitadel
You got the problem I'm trying to solve, and also the general solution I was thinking about (my last two replies). Thanks for making it clearer (as I said, I don't have much networking knowledge).

I don't think the speed of light should be an issue. The theoretical lower bound on RTT from San Francisco to NY is about 40 ms (calculated from the distance and the speed of light through fiber).
If we are talking about a game-server center in a central location and a client-server architecture, the data only needs to travel half that distance. And we are talking about coverage of the entire USA under one player pool (i.e. one game-server center).

I don't think anyone needs 20-50 ms RTT in a modern game (especially with all the modern in-game lag compensation mechanisms). People want the lowest possible, but below a certain number they only notice the difference because their RTT is shown to them.
From the reading I've done, I think that if I could achieve <80 ms RTT for the players farthest from the server center, it would be perfect.

The question now is whether this solution is practical, and whether it can result in a network which covers the entire USA under one player pool while everyone has <80 ms RTT.
I know that a definite answer requires considering many factors, but what I'm asking for is an assessment based on logic, knowledge and experience.

Also, the solution seems logical and not so complicated to implement, even to me, without much networking knowledge (I couldn't articulate it like starbasecitadel, but we meant the same solution).
The question is: why hasn't this solution been implemented by the big game companies?

#12 hplus0603   Moderators   -  Reputation: 5354


Posted 21 October 2012 - 01:48 PM

For everyone else, you still have to travel on the standard, suboptimal networks


You are assuming that the back-haul of the internet is suboptimal. That may be true for some of the discount tier-2/3 providers. If you look at top-quality providers, I think you'll find that, in general, it's a system that works pretty well, and that doing better yourself is hard, costly, and unlikely to improve the overall gaming experience a whole lot (we're talking small fractions of improvement here).

What I'm referring to is an application delivery network (ADN), which optimizes dynamic (non-cacheable) applications and content.


Why don't you call them up and ask how well their technology would work with Counter-Strike?

All "acceleration" I've seen in this space builds on specific knowledge about, typically, transaction-based, RPC-based, often HTTP-based application interactions. That's not a good match for the needs of a typical action game, like Counter-Strike.

It sounds a little bit like the question could then be phrased as "How can I build an ADN for Counter-Strike (and other action games)?"

Note that the large providers (Xbox Live!, etc) already do this to some extent -- when they auto-match players, they attempt to hook players together to achieve "best game experience" which may include lowering latency, and matching skill. Xbox Live! doesn't have its own back-haul, though, because building that kind of infrastructure is very expensive. Also, Xbox Live! uses player-hosted game servers, so it can dynamically treat each little "clique" of players as a network, while the centralized matchmaking servers make everyone potentially visible to everyone else.
The nice thing with that approach is that you just need to match players up by latency, and don't need to worry about any of the low-level hardware and costs of running networking infrastructure.

Edited by hplus0603, 21 October 2012 - 01:54 PM.

enum Bool { True, False, FileNotFound };

#13 magerX   Members   -  Reputation: 117


Posted 21 October 2012 - 02:39 PM

@hplus0603
Look at starbasecitadel's post; he articulated what I want to do better than I did (though the assumptions I wrote are still valid). The way you re-phrased the question I should have asked is probably appropriate as well.
More importantly, look at his solution. This is exactly what I meant when I said I wanted to leverage existing ADN servers/POPs to lower the latency under the single player pool concept.
So I guess existing ADNs are out, and a custom ADN is in (physical routers, reverse-proxy servers or any other solution -- I will probably still need to leverage an existing network like Ubiquity).

I read about the Live! architecture before, and as you also wrote, it uses player-hosted game servers and creates "cliques" of players as networks. Therefore, this solution is not good for me, because I want to create a network where everyone can play with everyone with a low enough RTT (whether it's the USA as one network, or broken into a couple of networks).

If you could give your opinion and thoughts about starbasecitadel's solution, that would be very nice. I think it's the best approach (and the same thing I meant when I talked about using a commercial ADN).

Thanks again.

#14 hplus0603   Moderators   -  Reputation: 5354


Posted 21 October 2012 - 05:02 PM

I want to create a network where everyone can play with everyone with a low enough RTT


Let's back up a little bit.

Why do you think this can't be done today? What is the specific value proposition you want to provide that is significantly different from what exists?

Using Xbox Live! as an example:
Xbox Live! lets everyone match up with everyone else -- I can play MW3 with a party of friends all over the world if I want.
The experience is slightly laggier if we're all dispersed and end up on a server somewhere far from me, but I can still play with those people.
If you use Xbox Live!-style matchmaking, but add a rule that the measured ping has to be at most X milliseconds, then you are in effect creating a "geographically partitioned" virtual network that provides the latency you want. The drawback is that you have to exclude any friends who are located far away from you, measured by ping.

So, using the existing internet and putting servers in a few data centers, you can group players into pools with a known maximum ping. If you raise the allowed ping, you can play with everyone in the world. If you lower it, you play within your regional area.
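
In code, that virtual partitioning is just a filter on the measured ping (a toy Python sketch; the names and numbers are invented):

MAX_RTT_MS = 80  # the cap the matchmaker enforces

def partition_by_ping(pings: dict, max_rtt: float = MAX_RTT_MS):
    # pings maps player -> measured RTT in ms to one candidate server.
    # Players under the cap form the "regional" pool for that server;
    # everyone else must be matched elsewhere, or accept worse latency.
    pool = {p for p, rtt in pings.items() if rtt <= max_rtt}
    excluded = set(pings) - pool
    return pool, excluded

pool, excluded = partition_by_ping(
    {"chicago_player": 18, "nyc_player": 42, "berlin_player": 115})
print(pool, excluded)
# {'chicago_player', 'nyc_player'} {'berlin_player'} (order may vary)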

Now, what part of this do you want to improve?

Do you want to provide better (faster) internet service to the actual players? To do that, you need to pull fiber to each player's home. (Google is starting to do this as an experiment, btw.)

Do you want to reduce the cross-continent part of the latency? You could potentially do this by leasing your own network, but the speed improvement would be some fraction of what you'd get on the commercial internet; I doubt any consumer would see enough difference to want to pay for it.


If we take a particular game as an example, maybe it would help. I can play Counter-Strike today. I can browse servers in a large part of the world. I can choose servers based on how high or low the ping is.
What part of this solution do you want to change, and how, and why?

Edited by hplus0603, 21 October 2012 - 05:04 PM.

enum Bool { True, False, FileNotFound };

#15 magerX   Members   -  Reputation: 117


Posted 21 October 2012 - 06:11 PM

@hplus0603
First I will address the why (value proposition):
The network itself is the core on which I plan to build several services. These services do not exist today, and for obvious reasons I do not wish to share them on a public forum.
So the network I want to build is a crucial means of providing the services that I want. For the discussion it does not matter what these services are; it's enough to know that the purpose of the network is to allow online multiplayer gaming in large geographical areas under the single player pool concept.

The Live! network:
As I said, it is not suitable for my purposes because, despite the fact that everyone can play with everyone all around the world, IN PRACTICE, if all of the players in a specific game want a low enough RTT, they end up playing with players who are close to them geographically.
So even though it allows a wide geographical network, it is not a practical solution to my problem, because people will be matched with people who are close to them.
I'm looking for a practical solution where, in a large geographical area, everyone can play with everyone and have a low enough RTT.

What I want to do:
(Let's assume we are talking about Counter-Strike, because it's available and popular.)
1. Define a network region. For our discussion we will say the entire USA is defined as one region. (I don't care about players outside the pre-defined network region.)
2. Build a SINGLE game-server center in a geographically central location within this region (for our discussion: the USA).
3. Create a "low-latency gamers network" that will reduce the latency to <80 ms RTT between players all across the USA and my game-server center. (With that I will create a "single player pool", i.e. everyone can play with everyone within the USA with an RTT of <80 ms; there are no low-ping or high-ping servers because everyone connects to the same server center.)

The problem is how to create this "low-latency gamers network" that achieves <80 ms RTT between players all across the USA and my SINGLE game-server center.
starbasecitadel suggested finding an existing great network (like Ubiquity, for example), and doing one of two things:
1. Lease servers in each of their geographically distributed data centers. Then create reverse-proxy servers on each of these nodes (using nginx, for example). These are not the game servers; they are like routers/peering points that forward TCP/UDP traffic to the actual game-server center, through an extremely fast internal long-distance network (like Ubiquity's, for example).
2. Use the same setup as #1, but instead of reverse-proxy servers, lease actual rack spaces filled with smart routers.
(*Look at starbasecitadel's post for a more in-depth explanation.)

I hope things are clearer now. If not, let me know and I will try to explain better.
If they are, I'm looking forward to hearing your opinion and thoughts :-)

Edited by magerX, 21 October 2012 - 06:15 PM.


#16 hplus0603   Moderators   -  Reputation: 5354


Posted 21 October 2012 - 10:15 PM

@hplus0603
First I will address the why (value proposition):
The network itself is the core on which I plan to build several services. These services do not exist today, and for obvious reasons I do not wish to share them on a public forum.


My experience, and that of the general start-up community, has been that ideas are a dime a dozen. Your problem is going to be finding the resources (people, money, connections, time) to deliver your plan. If some big company thought it was a good idea, they'd already be doing it, so the risk of "idea theft" is typically zero.

Note that this is true even for startups in Silicon Valley. For example, no venture capitalist will sign a non-disclosure agreement. And, truth be told, they have probably already heard some other company pitch the same idea you have anyway. What they are looking for is the ability to execute, plus a potentially large, lucrative, unserved market. When the right team comes along with the right idea and market opportunity is when they invest.

So the network I want to build is a crucial means of providing the services that I want.


Why? How wouldn't it work on the regular internet with virtual geographic grouping based on ping?

The Live! network:
As I said, it is not suitable for my purposes because, despite the fact that everyone can play with everyone all around the world, IN PRACTICE, if all of the players in a specific game want a low enough RTT, they end up playing with players who are close to them geographically.


Right. The infrastructure of the ISP and the realities of transmitting data from point A to point B make it so. The only way to improve this is to know who all your customers are, run special wiring/fiber straight to those customers, to a regional center, and then to wherever your server center is, and do it better than the AT&Ts and Comcasts and Verizons of the world. And, even so, you're only going to be better by some small factor depending largely on internal network overheads for the current ISPs. It's not like the current internet is run by people who don't know what they're doing -- they're already interested in the best possible performance of their network.

So even though it allows a wide geographical network, it is not a practical solution to my problem, because people will be matched with people who are close to them.


You can build a software system like Xbox Live! that matchmakes based on whatever parameters you want. What you can't do is make routers faster than they are, or make information travel faster than the speed of light.

I'm looking for a practical solution where, in a large geographical area, everyone can play with everyone and have a low enough RTT.


Again -- what part of that RTT are you looking to improve? Doing a traceroute from me to Google, about 2/3 of the time is spent winding through Comcast's network (actually going AROUND the Bay about 1.2 times...) before they hand off to Google peering. This is pretty typical for a cost-driven end-user ISP. If you want to actually improve the gaming experience, the biggest cut is probably the residential ISP's, and you're going to have to wire each of your customers with a superior technology. Fiber is cheap, but digging holes in streets is pretty expensive.

3. Create a "low latency gamers network" that will reduce the latency to <80ms RTT between players all across USA to my game servers center. (with that I will create a "single player pool" - i.e. everyone can play with everyone within the USA and have RTT of <80ms = there are no servers with low/high ping because everyone connect to the same servers center.)


If users access your data center through residential ISPs, I don't think that's going to happen. Some ISPs are great, some days. Other ISPs are terrible, most days.

Ubiquity is just a hosted data center provider. And they even boast of using a brand of hardware which is not at the top of my list if I had to build a high-performance trouble-free data center. They list their actual ISP connections (Level 3, GTT, etc.) They will not get you any closer to the ISPs than any other hosting location. Typically, you'll want to co-locate in a facility with high connectivity, and own your own hardware. You want to get as close as possible to the main places where ISPs interconnect (PAIX, NYIIX, SIX, etc)

Sure, you can obtain some IP address prefix, multi-home it across many data centers, and BGP announce them in a bunch of different locations. This will make user packets enter your control sooner. Then what? If you build long-haul connectivity, you will lease capacity from the various backbone providers. They will route this on the same network as they route general internet traffic. They may establish virtual circuits for you, so it looks like a direct connection to you, but underneath, it's the same fiber, and the same routers. Which, by the way, is the same fiber and routers used for long-distance telephony in many cases.

Why would your hop, and your entry into the long-haul backbones, be any better than that of the customer's residential ISP? Chances are, it would just look like a little detour -- packets come from ISP, go to you, turn around, and go to backhaul, and then enter you again at your destination. If you enter Level 3 networks or Sprint networks or any other backhaul provider, then chances are pretty good that you'll do that on the same terms as Comcast, Verizon, or any other end-user ISP will.

It might be good if you could provide some data on why existing latencies are not good enough, and why 80 ms is the magic number to beat. It would also be interesting to see data about how many users already have this latency to, say, a data center in LA, or Texas, or Virginia, or London, or whatever, and then compare to how many more you think you can get by making whatever improvements you're trying to make.

My guess is that, if you break down the typical latency seen by the typical gamer, you're unlikely to be able to significantly improve their experience unless you can somehow make a newer, better, connection to their home, and then also do a better job of hauling those packets to your data center than the companies that have been doing it for 20 years. If you could show data that shows that there is a significant market for this, and you can do it at a price that works for that market, that would be an awesome, big, world-changing project!

Edited by hplus0603, 21 October 2012 - 10:15 PM.

enum Bool { True, False, FileNotFound };

#17 magerX   Members   -  Reputation: 117


Posted 22 October 2012 - 07:48 AM

@hplus0603
My key service is not revolutionary in the sense that nobody has thought of something like it before, and it's not about developing some new "star" technology.
My idea is about "compiling" a number of existing services in a different way, and adding a "key service" which is not being provided yet. Providing this key service, and "compiling" the other services the way I think they should be, will make the sum much more than its parts, and that will be revolutionary.
Due to the way the industry has developed in recent years, such a service (and the way the services will be "compiled") will answer the needs of a lot of customers AND provide added value for game publishers.
The "key service" sits at the edge of what game publishers are responsible for (or would want to deal with), which creates somewhat of a blind spot that has resulted in this service not being provided yet.
I know NDAs are not common these days, but you'll agree there is a difference between pitching to a VC and exposing my ideas on a public forum. This is why I will not go into details about it here. Also, it is not relevant to this forum or to the problem I'm trying to solve in this thread.

The reason I need to group players based on regions (like battle.net) and not based on ping (like Live!) is the nature of the service(s) I will provide. The Live! scheme won't fit my purposes.
Essentially, what I'm trying to do (in the network part, which is what we're discussing in this thread) is create a virtual geographic grouping for the largest possible pre-defined geographical area, while providing a low enough RTT for all the players in that area under the single player pool concept. Hopefully the entire USA, but if that's not possible, then grouping it into smaller networks (for example: 1. USA East, 2. USA West) will be good enough.
80 ms is the number to beat because, from my reading, 80-100 ms is the threshold for a good game experience in FPS games (other game types can tolerate higher RTT, so it will obviously be good enough for them as well). It might change later on, but I had to pick a number for the sake of the discussion, and in general this seems like a good number to beat.

I'm not sure if I've failed to explain my proposed solution or if I'm missing the point you are trying to make. I will give it another try:
(*The solution I'm looking for is not about putting new cables in the ground or anything like that. What I'm looking for is to utilize existing infrastructure, networks, hardware, software, etc. in a different way.)
(**Please don't take my examples of companies or even technologies as "the way to go". I'm trying to explain what I mean without much background in the field, so I go by what I read and understand. Try to get to the root of what I mean, not the specifics.)
- My general assumption is that internet communication (we are talking inside the USA in our discussion) is not efficient and goes through a lot of hops from one point to another. This is why we are not near the optimal RTT in most cases from point A to point B.
- My second assumption is that there are private networks which have their network centers in a number of strategic locations all over the USA (near key exchange points where major backbone providers peer with each other, for example) and have peering agreements with major carriers. This allows them to have near-optimal RTT between their centers, and possibly a better connection from their centers to the end users (though I'm not sure about this part).

My idea is to utilize such a network in order to lower the RTT between players all across the USA and my ONE central game-server center. I'll do it by leasing servers (and using reverse proxies as explained before -- again, this is an example) or by leasing rack spaces (and putting in my own hardware) in all of this private network's centers. Additionally, in a central location in the private network (Chicago, for example), I will place my game-server center.
The games run only on this one game-server center. The other servers/rack spaces with my hardware are there to receive and forward TCP/UDP traffic between the players and the actual game-server center. These distributed servers/routers (peering points?) leverage the private network's near-optimal RTT between its centers, for the purpose of faster communication for players who are far from my server center.

Here is a picture of an existing network to illustrate my point:
http://oi49.tinypic.com/35kojs8.jpg
(*I edited the Chicago center to be a blue dot.)
All of the dots are POPs/centers in an existing private network: the blue dot is where I will put my game-server center, and the red dots are the other routers/peering points which receive and forward TCP/UDP traffic between the players and my game-server center.
Since this internal private network has near-optimal RTT between its centers, compare two situations:
1. A player from LA who connects directly to my Chicago game-server center.
2. A player from LA who connects to my LA router/peering point, with the packets then traveling at near-optimal RTT from my LA router/peering point to my game-server center.
Obviously the overall RTT will be lower in the second case. And with that, I have created a network which can provide a low enough RTT for larger-than-normal geographical areas under the single player pool concept.
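
As a toy calculation (every number below is invented for illustration), the relay path only wins if the private long haul genuinely beats the public one over the same distance:

last_mile_ms = 15          # LA player to the nearest point of presence (assumed)
public_longhaul_ms = 28    # LA -> Chicago over the public backbone (assumed)
private_backbone_ms = 24   # LA PoP -> Chicago over the leased network (assumed)
relay_hop_ms = 1           # forwarding cost at the PoP (assumed)

direct = last_mile_ms + public_longhaul_ms
via_relay = last_mile_ms + relay_hop_ms + private_backbone_ms
print(f"direct one-way: {direct} ms, via relay: {via_relay} ms")
# direct one-way: 43 ms, via relay: 40 ms -- the saving is only the gap
# between the public and private long hauls, minus the extra hop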


Does it make sense now? Is it possible, realistic and smart? Am I missing a crucial part which makes it not a good idea (highly possible given my lack of technical knowledge; this idea is mainly logic-based, after a few weeks of reading)?
Alternatively, do you have a better idea for creating the kind of network that I want? Up until now we have talked about why I need it this way, alternatives to what I need, and finding workarounds. If the network I described gives us what I need, do you have a creative idea for how to create such a network?

#18 hplus0603   Moderators   -  Reputation: 5354


Posted 22 October 2012 - 10:32 AM

- My general assumption is that internet communication (we are talking inside the USA in our discussion) is not efficient and goes through a lot of hops from one point to another. This is why we are not near the optimal RTT in most cases from point A to point B.
- My second assumption is that there are private networks which have their network centers in a number of strategic locations all over the USA (near key exchange points where major backbone providers peer with each other, for example) and have peering agreements with major carriers. This allows them to have near-optimal RTT between their centers, and possibly a better connection from their centers to the end users (though I'm not sure about this part).


Right. What I'm saying is that I don't think those assumptions are valid. The long-haul providers are reasonably efficient -- if they're not, then their competitors will be, and they'll die in the market!

It's been my experience that the main inefficiencies and sources of variability in the internet come from the consumer ISPs. Thus, to fix the main cause of this variability, you need to compete with the consumer ISPs, providing a better connection, which means shovels in the ground.

my LA router/peering point, with the packets then traveling at near-optimal RTT from my LA router/peering point to my game-server center


You are assuming that the long-haul between LA and Chicago that you can provide is better than the long-haul that the customer gets from the general Internet backbone. My instinct is that that won't be true -- you basically will just become another tier-3 long-haul provider, leasing capacity on the same fiber that the packets would travel on anyway. Unless you lay your own fiber -- which means shovels in the ground, and is unlikely to be all that much better than what's already in the ground.

Alternatively, do you have a better idea for creating the kind of network that I want?


How about you create all the service integration based on IP geolocation, without the private network, and see how well it works? If it's a system that needs "network effects" to succeed, it may be hard to roll out on a trial basis, but if you can find a way to do a smaller roll-out and measure actual network numbers, you'll have a much better picture of how (and if so, where) to optimize the transmission.
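
A trial like that could be prototyped with nothing more than an IP geolocation lookup. Here is a sketch using MaxMind's geoip2 reader (the database file and the longitude-based two-pool split are assumptions for illustration):

import geoip2.database  # MaxMind's official Python reader (pip install geoip2)
import geoip2.errors

READER = geoip2.database.Reader("GeoLite2-City.mmdb")  # assumed database file

def region_for(ip: str) -> str:
    # Assign a player pool from the client's IP alone -- no private
    # network involved. Splitting the USA at longitude -100 into two
    # pools is a hypothetical choice.
    try:
        loc = READER.city(ip).location
    except geoip2.errors.AddressNotFoundError:
        return "unknown"
    if loc.longitude is None:
        return "unknown"
    return "usa-west" if loc.longitude < -100 else "usa-east"

print(region_for("203.0.113.7"))  # result depends on the database contents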


enum Bool { True, False, FileNotFound };

#19 starbasecitadel   Members   -  Reputation: 699


Posted 23 October 2012 - 08:03 AM

http://aws.amazon.com/directconnect/

Amazon's AWS Direct Connect product appears to do something like what I was suggesting. That implies Amazon at least thinks the long-haul connectivity issues on the public internet are significant enough to warrant a major business product:

in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.



#20 hplus0603   Moderators   -  Reputation: 5354


Posted 23 October 2012 - 11:08 AM


in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.


The costs you reduce are the Amazon network bandwidth charges for accessing the Amazon cloud -- instead, you pay for the same provisioning.

The latency they talk about is the access from customer premises to Amazon data centers. If you buy dedicated connections from customer premises to their data centers, it side-steps the "local ISP" problem. That is reasonable for a company with a facility close to a local fiber loop, but it kind of dies when you try to do it for each end-user customer in an area.


Edited by hplus0603, 23 October 2012 - 11:11 AM.

enum Bool { True, False, FileNotFound };



