Large geographical area network

I did more reading and research on relevant networks, and I came across "SoftLayer".
They only offer dedicated servers, while I will probably look for colocation if I go with this method - so it's probably not the network I will choose (unless I sign some partnership deal with them). Despite that, their network is perfect for demonstrating what we mean (starbasecitadel and I). This is mainly because, on top of the explanations of their network and services, they publish their network latency publicly. That lets us take the discussion to a more practical, real-world level.
(I'm assuming SoftLayer is not the fastest network in the world, and that there are networks at least as good latency-wise that also offer colocation. So I will talk as if I can put my own hardware in the network's POPs, even though with this specific company I generally can't.)

Here is their network distribution (and detailed specs):
http://www.softlayer.com/about/network
Here is their real time pop-to-pop latency:
http://lg.softlayer.com/
Also, here is a blog post they wrote about their general "policy" of 40ms between the end user and their POPs:
http://blog.softlaye...de-web-is-flat/

We are talking about the USA network region only.
1. With their server distribution in the USA, most end users in any location within the USA will have lower than 40ms latency between them and the closest POP.
2. Under normal circumstances, the Dallas POP has latency lower than 59ms to any other POP in the network (in most cases much lower).

Based on the above, this is what I will do:
1. Put my game server center in the Dallas POP (the most central place, latency-wise, according to the SoftLayer IP Backbone "Looking Glass").
2. In all of the other POPs, have routers (and other hardware) that receive TCP/UDP traffic from end users and forward it to my game data center in Dallas (a rough sketch of such a forwarder is below).
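I'm not a network professional, but to make the forwarding idea concrete, here's a rough sketch of what I imagine each regional POP running. The Dallas address is a placeholder, and a real relay would also need idle-client cleanup, rate limiting, and monitoring:

```python
# Minimal UDP relay sketch for a regional POP: datagrams from end users are
# forwarded to the central Dallas cluster, and replies are sent back.
# DALLAS is a placeholder address; real code needs idle-socket cleanup.
import selectors
import socket

DALLAS = ("203.0.113.10", 27015)  # hypothetical central game server

sel = selectors.DefaultSelector()
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("0.0.0.0", 27015))
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, data=None)

upstreams = {}  # client address -> dedicated socket toward Dallas

while True:
    for key, _ in sel.select():
        sock = key.fileobj
        if key.data is None:                 # packet arriving from an end user
            packet, client = sock.recvfrom(2048)
            up = upstreams.get(client)
            if up is None:                   # first packet from this client
                up = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                up.setblocking(False)
                up.connect(DALLAS)
                upstreams[client] = up
                sel.register(up, selectors.EVENT_READ, data=client)
            up.send(packet)
        else:                                # reply from Dallas for client key.data
            packet = sock.recv(2048)
            listener.sendto(packet, key.data)
```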

Based on the numbers that SoftLayer publishes, I should have a network with <99ms latency between most end users within the US and my Dallas game server center.
I previously talked about 80ms as the "magic number", but after reading the relevant research I believe that <100ms is good enough (even for FPS games).
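Spelling out the arithmetic (and assuming the two published figures simply add up, which real routing will only approximate):

```python
# RTT budget from SoftLayer's published numbers; assumes the two legs add
# directly, which real routing will only approximate.
user_to_pop_ms = 40    # SoftLayer's stated ceiling, end user to nearest POP
pop_to_dallas_ms = 59  # worst POP-to-Dallas figure on their looking glass
print(user_to_pop_ms + pop_to_dallas_ms)  # 99 ms, just under the 100 ms FPS threshold
```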

Of course there are MANY practical things that need to be taken care of, but I'm not a network professional.
I'm looking for a solution which is theoretically possible, and smart, in order to create a network with the characteristics that I need. Once I find such a theoretical solution, I will consult (and then hire) network professionals to work on it in a practical manner.

What do you think?
If there's no minimum term, then you can run tests for a few thousand dollars. I think you should measure the difference between going to SoftLayer and then forwarding to Dallas, versus going directly to Dallas, for each player, to see whether the extra POP hop is worth it or not.
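Roughly, that per-player measurement could look like the sketch below. The hostnames are made up, and the POP-to-Dallas leg would really be measured once from inside each POP rather than from the client:

```python
# Compare RTT straight to Dallas vs. RTT to the nearest POP plus the
# POP-to-Dallas leg. Hostnames are hypothetical placeholders.
import re
import subprocess

def ping_ms(host: str, count: int = 10) -> float:
    """Average RTT in ms using the system ping (Linux output format assumed)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    match = re.search(r"= [\d.]+/([\d.]+)/", out)  # min/avg/max/mdev summary
    if match is None:
        raise RuntimeError(f"could not parse ping output for {host}")
    return float(match.group(1))

direct = ping_ms("dallas.example.net")        # client -> Dallas directly
to_pop = ping_ms("seattle-pop.example.net")   # client -> nearest POP
pop_to_dallas = 32.0                          # ms, measured inside the POP (placeholder)

print(f"direct: {direct:.1f} ms, via POP: {to_pop + pop_to_dallas:.1f} ms")
```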

Also, note that server rental (in any data center) typically has significant up-charges for upgrades you really will want -- like SSDs and RAM. If you want to build a real business around this, you should plan on having physical access and use co-located hardware for the majority of your servers (or plan on paying a lot more than necessary.)

Finally, if your application is latency sensitive, you should not use any kind of hosting that uses a hypervisor or any other kind of virtualization. The "Dedicated" hardware from SoftLayer (or any other such hardware provider) is the right kind to solve this problem.
enum Bool { True, False, FileNotFound };
As I said, I gave SoftLayer as an example because they have great explanations of their services (including hardware, latency, etc.), which makes it possible to ground the discussion in real-life expectations.
I will look for a company/network which is at least as good as SoftLayer but allows colocation. If you know of any, let me know.

I look at this with more of a scientific mindset - first I defined the problem; now I'm looking for a good hypothesis and defining my outcome expectations. The next step will be testing (before full deployment).
From what I've read, you have knowledge and experience in the field - so it would be helpful to hear your outcome expectations (and maybe adjustments to my hypothesis).
Say what you think; it's just hypothesizing and guessing expected outcomes - you don't have to be 100% right. Your opinion is obviously more educated than mine, which is why I opened this discussion :)

Thanks a lot, I really appreciate your comments.


PS
I see there are a lot of views on this thread, so anyone else who has an opinion on the matter is welcome to join the discussion.
[quote]I will look for a company/network which is at least as good as SoftLayer but allows colocation. If you know of any, let me know.[/quote]

How is that different from any co-location provider that already exists? Every co-location provider has space, cooling, redundant power, and a number of backbone/ISP providers of varying quality available on site. They won't be any different from what SoftLayer does. Just pick any reputable co-location facility in each of your regional centers. The drawback with co-location is that you have to actually have boots on the ground in that city, though. Another option is "fully managed" services, where you tell the data center staff what to do. It costs more, and the data center staff is not focused only on your application, but it avoids having to hire people all over the world.

There are a variety of managed hosting providers: Digital Realty (http://www.digitalrealty.com/), Rackspace (http://www.rackspace.com/managed_hosting/configurations/), Peer1 (http://www.peer1.com/managed-hosting), AT&T managed hosting (http://www.business.att.com/enterprise/Family/hosting-services/enterprise-managed-hosting/), and many more.

This is my point: all the back-haul goes across the same wires. My expectation is that it won't matter much whether you let the user's ISP figure out how to get the packets to your central data center, or you let the user's ISP get the packets to your regional data center and have the regional data center figure out how to get them to your central location, as long as the user's ISP is big enough to have reasonable peering agreements with all the big carriers.

For the average case, going into your regional center and then back out again will likely just add a little bit of latency.

In some degenerate cases, an ISP may have its own back-haul, which it oversubscribes and runs on a shoestring, and it might prefer to try to send the user's packets through its own wires to the nearest IX to your central server. In that case, re-branding the packets onto peering exchanges that you pay for may improve quality. I would expect the share of customers where this matters to be significantly less than 10%, though -- that's an expectation of mine that seems to be significantly different from your expectation. You seem to expect that this will be the common case. If you actually get hard data on this (in any part of the world) I'd be very interested in seeing the actual outcome!
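If you do collect that data, even a simple tally of per-player measurements would settle it. A sketch (field names and entries here are made up, and the 5 ms margin is an arbitrary cutoff):

```python
# Given per-player measurements of direct RTT vs. RTT via the regional POP,
# compute what share of players actually benefit from the extra hop.
measurements = [
    {"direct_ms": 88.0, "via_pop_ms": 79.0},
    {"direct_ms": 64.0, "via_pop_ms": 70.0},
    # ... one entry per tested player
]

improved = sum(1 for m in measurements if m["via_pop_ms"] < m["direct_ms"] - 5)
print(f"{100 * improved / len(measurements):.0f}% of players gain >5 ms via a POP")
```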

I still think the real question is whether the additional services will be worth it and will gain traction, so I still suggest trying to figure that out within a single data center first.
enum Bool { True, False, FileNotFound };
OK, I finally get your line of thought regarding my solution.
Sadly, it basically dismisses the solution (I say sadly because your opinion is more educated than mine, and chances are that you are right).

The question you asked at the end of your reply is not the right one.
Without the ability to create a network like the one we've talked about in this thread, I can't execute my key service in a way that will be good enough to be worth it.
The assumption is that if I CAN create such a network, it will be worth it - even if the costs of creating and maintaining it are high.
Proving that point is another part of my business plan, a part which is probably not relevant to this forum.

If we abandon the solution we've talked about so far, do you have any creative hypothesis for how to create such a network?
Such a network does not exist so far, so obviously it needs to be something creative.

edit:
Regarding your comment about SoftLayer and other co-location providers:
I don't think all the co-location/dedicated server networks are created equal. POP locations (which exchange points they sit at), peering agreements with carriers, backbone connections between POPs, etc. - these things should have a major impact on how fast the network is (end user to POP, and POP to POP).
[quote]I don't think all the co-location/dedicated server networks are created equal.[/quote]

Agreed! You have to first figure out what you actually need, and then talk to the centers that exist where you need a presence, to figure out what the right choice is. You may end up with a large firm with multiple centers in the world, or with a number of smaller centers in different locations.

At my current company, we use Digital Realty Trust, and they're OK. I've heard good things about Peer1, too. I know Rackspace mostly for their cloud solutions, so I don't know much about their managed offerings.

Here's another question: Is your main service only going to work if you get a particular latency, or is it only going to work if you get a particular improvement over typical customer latency? Will it work if some customers get a lot of improvement, and other customers get nothing at all? What is the driver behind the focus on latency?
enum Bool { True, False, FileNotFound };
The driver behind the focus on latency is providing a good gaming experience.
Most of the relevant scientific data I've read talks about 100ms as the threshold for FPS games, so this is "the number to beat" (it's lower for other game types).
This is also why the service will only work if I can get the vast majority of players within the network to have <100ms RTT.

I have no competition for this service, so improving on a typical customer's latency is not the point. It comes down to this:
If I can create a network like the one I've described in this thread, I can create the service that I want, and I believe it will be profitable (that's a different discussion). If I can't create such a network, I can't create the service that I want, and I can forget about the project.

As I said, smaller centers in different locations will not be good enough for me. The service won't succeed unless I can provide a network with a good enough gaming experience (<100ms RTT) over large areas, under a single player pool. (I defined the areas as: 1. USA, 2. Europe.)

The simplest way to think about it (in order to illustrate things) is this:
Creating a Counter-Strike game server center in a central location within the USA.
If I can be creative and build a network which allows the vast majority of players within the USA to have <100ms RTT between them and this game server center, then I can create the service(s) that I want. If I can't, then it does not matter what else I do; I can't provide the service(s) that I want, and the project is dead. (A sketch of this go/no-go check follows.)
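Here's the check with placeholder numbers, and with "vast majority" pinned arbitrarily to 95% for illustration:

```python
# Go/no-go check: the project is viable only if the vast majority of US
# players measure <100 ms RTT to the central cluster.
# rtts_ms would come from real measurements; these values are placeholders.
rtts_ms = [42, 67, 88, 103, 59, 96, 71]

THRESHOLD_MS = 100
share_ok = sum(1 for r in rtts_ms if r < THRESHOLD_MS) / len(rtts_ms)
print(f"{share_ok:.0%} of players under {THRESHOLD_MS} ms")
viable = share_ok >= 0.95  # "vast majority" pinned to 95% here, an assumption
```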
Beyond that: what the services are, whether they will be profitable or not, the business model, etc. is beyond the scope of this thread/forum, and obviously I won't talk about such things on a public forum.


Thanks :)
At my current employer we use various POPs around the country (including this one), along with high-bandwidth, low-latency private interconnects between them, and T1s, DS3s, fiber, etc. to the customer, in order to enforce QoS policy from end to end for our hosted VoIP product. As hplus mentioned, the biggest hit the average home internet connection faces is the pinball game the residential ISPs play with their packets. Packets will even occasionally change routes on the ISP's network as its internal routing protocols dynamically try to balance traffic across its interconnects to save money. So unless you require a T1 or some other private circuit between the end user and the closest POP, you won't gain anything significant by building a standalone gaming network on an island in the cloud.

The workaround is to get interconnects with the local ISPs like Comcast, Time Warner, Verizon, Charter, etc. so they prioritize traffic to your network, but this is extremely cost-inefficient unless you have enough paying customers on that ISP to make it worthwhile. Space, power, and interconnects can also rapidly reach 7 figures a year in recurring costs when dealing with multiple data centers.

That being said, what you will need to build this (should you choose to continue despite the above) is a tier-1 service provider like Level 3 Communications that has multiple data centers all over the region in question. You will be hard pressed to find another service provider in the US that doesn't at some point utilize the Level 3 backbone, so I mention them as a starting point, although there are many smaller service providers that may be more cost-effective.

Once you have a tier-1 carrier, you purchase DIA (Dedicated Internet Access) circuits from them, which are much cheaper than private circuits and have the added benefit of being "on-net" for the carrier. This means traffic between your POPs can potentially (with some calls to the right people) take optimized routes on their backbone instead of hopping over the public internet, without paying for the much more expensive private circuits.

That is just a small part of getting started. There is still a lot of network design which I won't even get into without more details on what the individual POPs will be responsible for. The networking equipment for multiple data centers alone is something else that can reach 7 figures very quickly, depending on the requirements.
Evillive2
