

magerX

Member Since 11 Oct 2012
Offline Last Active Oct 28 2012 04:03 PM

Posts I've Made

In Topic: Large geographical area network

24 October 2012 - 08:11 PM

The focus on latency is driven by the need to provide a good gaming experience.
Most of the relevant research I've read cites 100ms as the threshold for FPS games, so this is "the number to beat" (the threshold is higher for other game types).
This is also why the service will only work if I can get the vast majority of players within the network to have <100ms RTT.

I have no competition with this service, so there is no existing customer latency to improve upon. It comes down to this:
If I can create a network like the one described in this thread, I can create the service I want, and I believe it will be profitable (that's a different discussion). If I can't create such a network, I can't create the service I want, and I can forget about the project.

As I said, smaller centers in different locations will not be good enough for me. The service won't succeed unless I can provide a network with a good enough gaming experience (<100ms RTT) for large areas under a "single player pool". (I defined the areas as: 1. USA, 2. Europe.)

The simplest way to think about it (in order to illustrate things) is this:
Imagine creating a Counter-Strike game server center in a central location within the USA.
If I can be creative and build a network that allows the vast majority of players within the USA to have <100ms RTT between them and this game server center, then I can create the service(s) I want. If I can't, then nothing else I do matters: I can't provide the service(s) I want, and the project is dead.
Beyond that, what the services are, whether they will be profitable, the business model, etc. are beyond the scope of this thread/forum, and obviously I won't discuss such things on a public forum.


Thanks

In Topic: Large geographical area network

24 October 2012 - 03:00 PM

OK, I finally get your line of thought regarding my solution.
Sadly, it basically dismisses the solution (I say sadly because your opinion is more educated, and chances are you are right).

The question you asked at the end of your reply is not the right one.
Without the ability to create a network like the one we've discussed in this thread, I can't execute my key service well enough to be worth it.
The assumption is that if I CAN create such a network, it will be worth it, even if the costs of creating and maintaining it are high.
Proving this point is another part of my business plan, a part which is probably not relevant to this forum.

If we abandon the solution we've discussed so far, do you have any creative hypothesis for how to create such a network?
Such a network does not exist yet, so obviously it needs to be something creative.

Edit:
Regarding your comment about SoftLayer and other co-location providers:
I don't think all co-location/dedicated server networks are created equal. POP locations (which exchange points they sit at), peering agreements with carriers, backbone connections between POPs, and so on should all have a major impact on how fast the network is (end user to POP, and POP to POP).

In Topic: Large geographical area network

24 October 2012 - 04:22 AM

As I said, I gave SoftLayer as an example because they have a great explanation of their services (including hardware, latency, etc.), which makes it possible to ground the discussion in real-life expectations.
I will look for a company/network that is at least as good as SoftLayer but allows colocation. If you know of any, let me know.

I look at this with more of a scientific mindset: first I defined the problem, now I'm looking for good hypotheses and defining my outcome expectations. The next step will be testing (before full deployment).
From what I've read, you have knowledge and experience in the field, so it would be helpful to hear your outcome expectations (and maybe adjustments to my hypothesis).
Say what you think; it's just hypothesizing and estimating expected outcomes, so you don't have to be 100% right. Your opinion is obviously more educated than mine, which is why I opened this discussion.
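For the testing step, one simple starting point is to estimate RTT by timing a TCP handshake against candidate POPs. This is my own sketch, not something from the thread; the candidate hostnames in the comment are placeholders, not real POPs:

```python
import socket
import time

def tcp_connect_rtt_ms(host, port=80, timeout=2.0):
    """Estimate RTT by timing a TCP handshake (roughly one round trip)."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only wanted the handshake time
    return (time.monotonic() - start) * 1000.0

# Hypothetical usage: probe candidate POPs and keep the closest one.
# candidates = ["pop-dallas.example.net", "pop-seattle.example.net"]
# best = min(candidates, key=tcp_connect_rtt_ms)
```

ICMP ping or repeated UDP probes would give a cleaner number, but a handshake timer needs no special privileges and works against any host with an open TCP port.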

Thanks a lot, I really appreciate your comments.


PS
I see there are a lot of views on this thread, so anyone else who has an opinion on the matter is welcome to join the discussion.

In Topic: Large geographical area network

23 October 2012 - 06:27 PM

I did more reading and some research into relevant networks, and I came up with SoftLayer.
They only offer dedicated servers, while I will probably look for colocation if I go with this method, so it's probably not the network I will choose (unless I sign some partnership deal with them). Despite that, their network is perfect for demonstrating what we (starbasecitadel and I) mean. This is mainly because, on top of the explanations of their network and services, they publish their network latency publicly. This can take our discussion to a more practical, real-world level.
(*I'm assuming SoftLayer is not the fastest network in the world, and that there are networks at least as good latency-wise that also offer colocation. So I will talk as if I can have my own hardware in the network's POPs, even though with this specific company I generally can't.)

Here is their network distribution(and specific specs):
http://www.softlayer.com/about/network
Here is their real time pop-to-pop latency:
http://lg.softlayer.com/
Also, here is a blog post they wrote about their general "policy" of 40ms between end-user and their POPs:
http://blog.softlaye...de-web-is-flat/

We are talking about the USA network region only.
1. With their server distribution in the USA, most end users in any location within the USA will have less than 40ms latency between them and the closest POP.
2. Under normal circumstances, the Dallas POP has less than 59ms latency to any other POP in the network (in most cases it's much lower).

Based on the above, this is what I would do:
1. Put my game server center in the Dallas POP (the most central place, latency-wise, according to the SoftLayer IP Backbone "Looking Glass").
2. In all of the other POPs, install routers (and other hardware) that receive and forward TCP/UDP traffic from end users to my game data center in Dallas.

Based on the numbers SoftLayer publishes, I should get a network with <99ms latency between most end users within the US and my Dallas game server center.
I previously talked about 80ms as the "magic number", but after reading the relevant research I believe <100ms is good enough (even for FPS games).
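To make the arithmetic behind that <99ms figure explicit, here is a quick sanity check of my own (assuming both published SoftLayer figures are round-trip times, since that is how they are being added up):

```python
def worst_case_rtt_ms(leg_rtts_ms):
    """Upper bound on end-to-end RTT: relaying adds the RTT of each leg."""
    return sum(leg_rtts_ms)

USER_TO_POP = 40    # ms, SoftLayer's stated bound: end user to nearest POP
POP_TO_DALLAS = 59  # ms, worst published POP-to-POP figure to Dallas

total = worst_case_rtt_ms([USER_TO_POP, POP_TO_DALLAS])
print(total)  # 99 -> just under the 100 ms FPS threshold
```

Note this is a worst case of worst cases; a typical player near a POP with a typical backbone hop would land well below 99ms.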

Of course there are MANY practical things that need to be taken care of, but I'm not a network professional.
I'm looking for a solution that is theoretically possible, and smart, for creating a network with the characteristics I need. After I find such a theoretical solution, I will consult (and then hire) network professionals to work it out in practice.

What do you think?

In Topic: Large geographical area network

22 October 2012 - 07:48 AM

@hplus0603
My key service is not revolutionary in the sense that nobody has thought of anything like it before, and it's not about developing some new technology "star".
My idea is about "compiling" a number of existing services in a different way, and adding a "key service" that is not being provided yet. Providing this key service, and "compiling" the other services the way I think they should be, will make the sum much more than its parts, and revolutionary.
Due to the way the industry has developed in recent years, such service(s) (and the way they are "compiled") will answer the needs of a lot of customers AND provide added value for game publishers.
The "key service" sits at the edges of game publishers' responsibility (or of what they would want to deal with), which creates somewhat of a blind spot that has resulted in this service not being provided yet.
I know NDAs are not common these days, but you'll agree there is a difference between pitching to a VC and exposing my ideas on a public forum. This is why I won't go into details here. Also, it's not relevant to this forum or to the problem I'm trying to solve in this thread.

The reason I need to group players by region (like battle.net) and not by ping (like Live!) is the nature of the service(s) I will provide. The Live! scheme won't fit my purposes.
Essentially, what I'm trying to do (in the network part, which is what we're discussing in this thread) is create a virtual geographic grouping for the largest possible pre-defined geographical area, while providing low enough RTT for all the players in this pre-defined area under a single player pool concept. Hopefully the entire USA, but if that's not possible, then grouping into smaller networks (for example: 1. USA East, 2. USA West) will be good enough.
The 80ms number is the number to beat because, from my reading, 80ms-100ms is the threshold for a good game experience in FPS games (other game types can tolerate higher RTT, so it will obviously be good enough for them as well). It might change later on, but I had to come up with a number for the sake of the discussion, and in general this seems like a good number to beat.

I'm not sure whether I've failed to explain my proposed solution or I'm missing the point you're trying to make. I'll give it another try:
(*The solution I'm looking for is not about putting new cables in the ground or anything like that. What I'm looking for is to utilize existing infrastructure, networks, hardware, software, etc. in a different way.)
(**Please don't take my examples of companies, or even technologies, as "the way to go". I'm trying to explain what I mean without much background in the field, so I go by what I read and understand. Try to get to the root of what I mean, not the specifics.)
- My general assumption is that internet communication (we're talking inside the USA in our discussion) is not efficient and goes through a lot of hops from one point to another. This is why, in most cases, we are far from the optimal RTT between point A and point B.
- My second assumption is that there are private networks that have centers in a number of strategic locations all over the USA (near key exchange points where major backbone providers peer with each other, for example) and that have peering agreements with major carriers. This allows them to have near-optimal RTT between their centers, and possibly better connections from their centers to end users (though I'm not sure about this part).

My idea is to utilize such a network to lower the RTT between players all across the USA and my ONE central game server center. I'd do this by leasing servers (and using a reverse proxy, as explained before; again, this is just an example) or by actually leasing rack space (and installing my own hardware) in all of this private network's centers. Additionally, in a central location in the private network (Chicago, for example), I would place my game server center.
The games run only on this one game server center. The other servers/rack spaces with my hardware are there to receive and forward TCP/UDP traffic between the players and the actual game server center. These distributed servers/routers (peering points?) are there to leverage the private network's near-optimal RTT between its centers, for the purpose of faster communication for players who are far from my server center.
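The "receive and forward" role of those POP boxes can be sketched as a tiny UDP relay. This is my own illustration, not something from the thread: a real relay would track many players at once, handle TCP as well, and run on dedicated hardware, but the core idea is just two sockets and a forwarding loop:

```python
import select
import socket

def run_udp_relay(listen_addr, game_server_addr, max_packets=100, timeout=5.0):
    """Forward UDP datagrams between one player and the central game server.

    listen_addr: (host, port) the player sends to (the POP-side relay).
    game_server_addr: (host, port) of the single central game server center.
    This sketch handles a single player; a real relay would keep a map of
    player addresses to per-player server-side sockets.
    """
    player_side = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    player_side.bind(listen_addr)
    server_side = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    player_addr = None

    for _ in range(max_packets):
        ready, _, _ = select.select([player_side, server_side], [], [], timeout)
        if not ready:
            break  # idle long enough: stop the sketch
        for sock in ready:
            data, addr = sock.recvfrom(2048)
            if sock is player_side:
                player_addr = addr  # remember where replies should go
                server_side.sendto(data, game_server_addr)
            elif player_addr is not None:
                player_side.sendto(data, player_addr)
```

The relay adds essentially no processing time; the latency win (if any) comes entirely from the private backbone between the POP and the game server center being faster than the public internet path.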

Here is a picture of an existing network to illustrate my point:
http://oi49.tinypic.com/35kojs8.jpg
(*I edited the Chicago center to be a blue dot.)
All of the dots are POPs/centers in an existing private network: the blue dot is where I would put my game server center; the red dots are the other routers/peering points that receive and forward TCP/UDP traffic between the players and my game server center.
Since this internal private network has near-optimal RTT between its centers, compare two situations:
1. A player from LA who connects directly to my Chicago game server center.
2. A player from LA who connects to my LA router/peering point, from which the packets travel at near-optimal RTT to my game server center.
Obviously, the overall RTT will be lower in the second case. And with this, I've created a network that can provide low enough RTT for a larger-than-normal geographical area under a single player pool concept.


Does it make sense now? Is it possible, realistic, and smart? Am I missing a crucial part that makes it a bad idea (highly possible, given my lack of technical knowledge; this idea is mainly logic-based after a few weeks of reading)?
Alternatively, do you have a better idea for creating the kind of network I want? Up until now we've talked about why I need it this way, alternatives to what I need, and finding workarounds. If the network I described gives us what we need, do you have a creative idea for how to create such a network?
