
magerX

Members
  • Content count

    13
Community Reputation

117 Neutral

About magerX

  • Rank
    Member
  1. The drive behind the focus on latency is to provide a good gaming experience. Most of the relevant research I have read treats 100ms as the threshold for FPS games, so that is "the number to beat" (the threshold is higher for other game types). This is also why the service will only work if I can get the vast majority of players within the network to have <100ms RTT. I have no competition for this service, so there is no existing customer latency baseline to improve on.

     It comes down to this: if I can create a network like the one I described in this thread, I can create the service that I want, and I believe it will be profitable (that is a separate discussion). If I can't create such a network, I can't create the service that I want, and I can forget about the project. As I said, smaller centers in different locations will not be good enough for me. The service won't succeed unless I can provide a network with a good enough gaming experience (<100ms RTT) for large areas, under a "single player pool". (I defined the areas as: 1. USA, 2. Europe.)

     The simplest way to think about it, just to illustrate: create a Counter-Strike game servers center in a central location within the USA. If I can be creative and build a network that allows the vast majority of players within the USA to have <100ms RTT between them and this game servers center, then I can create the service(s) that I want. If I can't, then nothing else I do matters; I can't provide the service(s) that I want, and the project is dead.

     Beyond that: what the services are, whether they will be profitable, the business model, etc. are beyond the scope of this thread/forum, and obviously I won't discuss such things on a public forum. Thanks :-)
  2. OK, I finally get your line of thought regarding my solution. Sadly, it basically dismisses the solution (I say sadly because your opinion is more educated than mine, and chances are you are right). The question you asked at the end of your reply is not the right one. Without the ability to create a network like the one we have discussed in this thread, I can't execute my key service in a way that will be good enough to be worth it. The assumption is that if I CAN create such a network, it will be worth it, even if the costs of creating and maintaining it are high. Proving that point is another part of my business plan, which is probably not relevant to this forum.

     If we abandon the solution we have discussed so far, do you have any creative hypothesis for how to create such a network? A network like this does not exist yet, so it obviously needs to be something creative.

     Edit: Regarding your comment about SoftLayer and other co-location providers: I don't think all co-location/dedicated server networks are created equal. POP locations (which exchange points they sit at), peering agreements with carriers, backbone connections between POPs, etc. should all have a major impact on how fast the network is (end user to POP, and POP to POP).
  3. As I said, I gave SoftLayer as an example because they have a great explanation of their services (including hardware, latency, etc.), which makes it possible to ground the discussion in real-life expectations. I will look for a company/network that is at least as good as SoftLayer but allows colocation. If you know of any, let me know.

     I look at this with more of a scientific mindset: first I defined the problem, now I'm looking for good hypotheses and defining my expected outcomes. The next step will be testing, before full deployment (a minimal example of the kind of probe I have in mind is below). From what I have read, you have knowledge and experience in the field, so it would be helpful to hear your expected outcomes (and maybe adjustments to my hypothesis). Say what you think; it's just hypothesizing and estimating expected outcomes, and you don't have to be 100% right. Your opinion is obviously more educated than mine, which is why I opened this discussion :-)

     Thanks a lot, I really appreciate your comments.

     PS: I see there are a lot of views on this thread, so anyone else who has an opinion on the matter is welcome to join the discussion.
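     To show what I mean by testing, here is a minimal sketch of the kind of measurement I would start with: a UDP round-trip probe against a small echo service running at a candidate POP. The host name, the port, and the existence of that echo service are hypothetical placeholders; a real test plan would probe many POPs from many end-user locations.
[code]
# Minimal RTT probe: send N UDP packets to an echo endpoint at a candidate
# POP and report min/median/max round-trip time in milliseconds.
import socket
import statistics
import time

PROBE_HOST = "pop.example.net"   # hypothetical echo server at a candidate POP
PROBE_PORT = 9999                # hypothetical port for the echo service
SAMPLES = 20

def measure_rtt(host: str, port: int, samples: int) -> list[float]:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    rtts = []
    for i in range(samples):
        payload = f"probe-{i}".encode()
        start = time.perf_counter()
        sock.sendto(payload, (host, port))
        try:
            sock.recvfrom(2048)                  # echo server returns the payload
        except socket.timeout:
            continue                             # treat a timeout as a lost sample
        rtts.append((time.perf_counter() - start) * 1000.0)
    sock.close()
    return rtts

if __name__ == "__main__":
    rtts = measure_rtt(PROBE_HOST, PROBE_PORT, SAMPLES)
    if rtts:
        print(f"min {min(rtts):.1f} ms, median {statistics.median(rtts):.1f} ms, "
              f"max {max(rtts):.1f} ms over {len(rtts)} replies")
    else:
        print("no replies received")
[/code]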
  4. I did more reading and some research into relevant networks and came across SoftLayer. They only offer dedicated servers, while I will probably look for colocation if I go with this method, so they are probably not the network I will choose (unless I sign some partnership deal with them). Despite that, their network is perfect for demonstrating what we (starbasecitadel and I) mean, mainly because on top of the explanations of their network and services, they publish their network latency publicly. This lets us take the discussion to a more practical, real-world level.

     (*I'm assuming SoftLayer is not the fastest network in the world, and that there are networks at least as good latency-wise that also offer colocation. So I will talk as if I can place my own hardware in the network's POPs, even though with this specific company I generally can't.)

     Here is their network distribution (and specific specs): [url="http://www.softlayer.com/about/network"]http://www.softlayer.com/about/network[/url]
     Here is their real-time POP-to-POP latency: [url="http://lg.softlayer.com/"]http://lg.softlayer.com/[/url]
     Also, here is a blog post they wrote about their general policy of 40ms between end users and their POPs: [url="http://blog.softlayer.com/2011/globalization-and-hosting-the-world-wide-web-is-flat/"]http://blog.softlaye...de-web-is-flat/[/url]

     We are talking about the USA network region only.
     1. With their server distribution in the USA, most end users in any location within the USA will have less than 40ms latency to the closest POP.
     2. Under normal circumstances, the Dallas POP has latency lower than 59ms to any other POP in the network (in most cases much lower).

     Based on the above, this is what I would do:
     1. Place my game servers center in the Dallas POP (the most central location, latency-wise, according to the SoftLayer IP Backbone "Looking Glass").
     2. In all of the other POPs, place routers (and other hardware) that receive and forward TCP/UDP traffic from end users to my game data center in Dallas.

     Based on the numbers SoftLayer publishes, I should end up with a network that has <99ms latency between most end users within the US and my Dallas game servers center (see the budget sketch below). I previously talked about 80ms as the "magic number", but after reading the relevant research I believe <100ms is good enough, even for FPS games.

     Of course there are MANY practical things that need to be taken care of, but I'm not a network professional. I'm looking for a solution that is theoretically possible, and smart, in order to create a network with the characteristics that I need. After I find such a theoretical solution, I will consult (and then hire) network professionals to work on it in a practical manner. What do you think?
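     To make the <99ms figure explicit, here is the back-of-the-envelope budget using only the two numbers quoted above. It also shows how little headroom is left for the forwarding hardware itself, which is an overhead the worst case has to absorb.
[code]
# Worst-case budget for: end user -> nearest POP -> provider backbone -> Dallas.
# Both inputs are the published/quoted worst-case figures mentioned above.
USER_TO_POP_MS = 40       # stated worst case, end user to nearest POP
POP_TO_DALLAS_MS = 59     # quoted worst-case backbone RTT, any POP to Dallas
BUDGET_MS = 100           # target RTT for FPS play

worst_case = USER_TO_POP_MS + POP_TO_DALLAS_MS
margin = BUDGET_MS - worst_case
print(f"worst-case end-to-end RTT ~= {worst_case} ms, "
      f"leaving ~{margin} ms for forwarding/processing overhead")
[/code]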
  5. @hplus0603 My key service is not revolutionary in the sense that nobody has thought of anything like it before, and it's not about developing some new "star" technology. My idea is about combining a number of existing services in a different way and adding a "key service" which is not being provided yet. Providing this key service and combining the other services the way I think they should be combined will make the sum much more than its parts, and that is what will be revolutionary. Due to the way the industry has developed in recent years, such a service (and the way the pieces are combined) will answer the needs of a lot of customers AND provide added value for game publishers. The "key service" sits at the edge of what game publisher companies are responsible for (or would want to deal with), which creates something of a blind spot and is why this service is not being provided yet. I know NDAs are not common these days, but you can agree there is a difference between pitching to a VC and exposing my ideas on a public forum, which is why I won't go into detail about it here. It's also not relevant to this forum or to the problem I'm trying to solve in this thread.

     The reason I need to group players by region (like Battle.net) and not by ping (like Live!) is the nature of the service(s) I will provide; the Live! scheme won't fit my purposes. Essentially, what I'm trying to do (in the network part, which is what we are discussing in this thread) is create a virtual geographic grouping for the largest possible pre-defined geographical area, while providing low enough RTT for all the players in this pre-defined area under a single player pool concept. Hopefully that is the entire USA, but if that's not possible, grouping into smaller networks (for example: 1. USA East, 2. USA West) will be good enough. The 80ms number is the number to beat because, from my reading, 80ms-100ms is the threshold for a good game experience in FPS games (other game types can tolerate higher RTT, so it will obviously be good enough for them as well). It might change later on, but I had to come up with a number for the sake of the discussion, and in general this seems like a good number to beat.

     I'm not sure whether I'm failing to explain my proposed solution or missing the point you are trying to make, so I will give it another try:

     (*The solution I'm looking for is not about putting new cables in the ground or anything like that. What I'm looking for is a way to utilize existing infrastructure, networks, hardware, software, etc. differently.)
     (**Please don't take my examples of companies or even technologies as "the way to go". I'm trying to explain what I mean without much background in the field, so I go by what I read and understand. Try to get to the root of what I mean and not the specifics.)

     - My general assumption is that internet communication (we are talking about inside the USA in this discussion) is not efficient and goes through a lot of hops from one point to another, which is why we are nowhere near optimal RTT from point A to point B in most cases.
     - My second assumption is that there are private networks that have their own network centers in a number of strategic locations all over the USA (near key exchange points where major backbone providers peer with each other, for example) and have peering agreements with major carriers. This approach allows them to have near-optimal RTT between their centers, and possibly better connections from their centers to end users (though I'm not sure about this part).

     My idea is to utilize such a network in order to lower the RTT between players all across the USA and my ONE central game servers center. I would do it by leasing servers (and using reverse proxies as explained before; again, this is just an example) or by actually leasing rack space (and installing my own hardware) in all of this private network's centers. Additionally, in a central location within the private network (Chicago, for example), I would place my game servers center. The games run only on this one game servers center. The other servers/rack spaces with my hardware are there to receive and forward TCP/UDP traffic between the players and the actual game servers center. These distributed servers/routers (peering points?) are there to leverage the private network's near-optimal RTT between its centers, for the purpose of faster communication for players who are far from my servers center.

     Here is a picture of an existing network to illustrate my point: [url="http://oi49.tinypic.com/35kojs8.jpg"]http://oi49.tinypic.com/35kojs8.jpg[/url] (*I edited the Chicago center to be a blue dot.) All of the dots are POPs/centers in an existing private network: the blue dot is where I would put my game servers center, and the red dots are the other routers/peering points that receive and forward TCP/UDP traffic between the players and my game servers center.

     Since this internal private network has near-optimal RTT between its centers, compare two situations:
     1. A player from LA who connects directly to my Chicago game servers center.
     2. A player from LA who connects to my LA router/peering point, after which the packets travel with near-optimal RTT from my LA router/peering point to my game servers center.
     The overall RTT will obviously be lower in the second case (a rough comparison under stated assumptions is sketched below). And with that, I have created a network that can provide low enough RTT for a larger-than-normal geographical area under a single player pool concept.

     Does it make sense now? Is it possible, realistic, and smart? Am I missing a crucial part that makes it a bad idea (highly possible given my lack of technical knowledge; this idea is mainly logic-based after a few weeks of reading)? Alternatively, do you have a better idea for creating the kind of network that I want? Up until now we have talked about why I need it this way, alternatives to what I need, and finding workarounds. If the network I described gives us what we need, do you have a creative idea for how to create such a network?
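     To illustrate why the relayed path can win, here is a rough comparison under explicit assumptions: a speed-of-light-in-fiber baseline, an assumed inflation factor for an ordinary public-internet path, and a smaller assumed inflation factor for a tuned private backbone. The distances are approximate, and the inflation factors and relay overhead are purely hypothetical placeholders chosen to illustrate the argument, not measurements.
[code]
# (1) direct public-internet path LA -> Chicago versus
# (2) LA -> nearby POP -> private backbone -> Chicago.
FIBER_KM_PER_MS = 200            # light in fiber covers roughly 200 km per ms (one way)

LA_CHICAGO_KM = 2800             # approximate great-circle distance
LA_LOCAL_POP_KM = 50             # assumed distance from the player to a nearby POP

PUBLIC_INTERNET_INFLATION = 2.0  # assumption: public path ~2x the optical minimum
BACKBONE_INFLATION = 1.2         # assumption: private backbone ~1.2x the optical minimum
RELAY_OVERHEAD_MS = 1            # assumed forwarding cost at the POP (per round trip)

def rtt_ms(distance_km: float, inflation: float) -> float:
    """Round trip over one leg: there and back, scaled by the path inflation factor."""
    return 2 * distance_km / FIBER_KM_PER_MS * inflation

direct = rtt_ms(LA_CHICAGO_KM, PUBLIC_INTERNET_INFLATION)
relayed = (rtt_ms(LA_LOCAL_POP_KM, PUBLIC_INTERNET_INFLATION)
           + rtt_ms(LA_CHICAGO_KM, BACKBONE_INFLATION)
           + RELAY_OVERHEAD_MS)

print(f"direct (public internet): ~{direct:.0f} ms RTT")
print(f"relayed via local POP:    ~{relayed:.0f} ms RTT")
[/code]
     Under these assumptions the relayed path comes out clearly ahead; whether the real inflation factors look anything like this is exactly what real measurements would have to confirm.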
  6. @hplus0603
     [b]First I will address the why (value proposition):[/b] The network itself is the core that I plan to build several services on. These services do not exist today, and for obvious reasons I do not wish to share them on a public forum. So the network I want to build is a crucial means of providing the services that I want. For this discussion it does not matter what those services are; it's enough to know that the purpose of the network is to allow online multiplayer gaming across large geographical areas under a single player pool concept.

     [b]Live! network:[/b] As I said, it is not suitable for my purposes. Even though everyone can technically play with everyone all around the world, IN PRACTICE, if all of the players in a given game want low enough RTT, they will be playing with players who are geographically close to them. So even though it allows a wide geographical network, it is not a practical solution to my problem, because people will be matched with people who are close to them. I'm looking for a practical solution where, within a large geographical area, everyone can play with everyone and have low enough RTT.

     [b]What I want to do:[/b] (Let's assume we are talking about Counter-Strike, because it's available and popular.)
     1. Define a network region. For our discussion, the entire USA is defined as one region. (I don't care about players outside the pre-defined network region.)
     2. Build a SINGLE game servers center in a central geographical location within this region (for our discussion, the USA).
     3. Create a "low latency gamers network" that reduces the latency to <80ms RTT between players all across the USA and my game servers center. (With that I will have created a "single player pool", i.e. everyone can play with everyone within the USA with an RTT of <80ms; there are no low-ping or high-ping servers because everyone connects to the same servers center.)

     The problem is how to create this "low latency gamers network" so as to achieve <80ms RTT between players all across the USA and my SINGLE game servers center. starbasecitadel suggested finding an existing high-quality network (like Ubiquity, for example) and doing one of two things:
     1. Lease servers in each of their geographically distributed data centers, then run reverse proxy servers on each of these nodes (using nginx, for example). These are not the game servers; they act like routers/peering points that forward TCP/UDP traffic to the actual game servers center through an extremely fast internal long-distance network (like Ubiquity's, for example). A minimal sketch of such a forwarding node follows this post.
     2. The same setup as #1, but instead of reverse proxy servers, lease actual rack space filled with smart routers.
     (*See starbasecitadel's post for a more in-depth explanation.)

     I hope things are clearer now. If not, let me know and I will try to explain better. If they are, I'm looking forward to hearing your opinion and thoughts :-)
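     To make option #1 concrete, here is a minimal sketch of the forwarding logic such a node would run, written as a small Python/asyncio UDP relay rather than an actual nginx configuration. The central server address and the port are hypothetical placeholders, and a real forwarding node would also need idle cleanup, timeouts, TCP handling, and capacity for many concurrent players; this only shows the receive-and-forward idea.
[code]
# Sketch of a POP forwarding node: accept UDP game traffic from nearby players
# and relay it to the single central game servers center, routing replies back
# to the right player. One upstream socket is opened per player so the central
# server can tell players apart.
import asyncio

CENTRAL_GAME_SERVER = ("game-center.example.net", 27015)  # hypothetical address
LISTEN_PORT = 27015                                       # player-facing UDP port

class UpstreamProtocol(asyncio.DatagramProtocol):
    """Per-player socket toward the central server; replies go back to that player."""
    def __init__(self, relay, player_addr):
        self.relay = relay
        self.player_addr = player_addr

    def datagram_received(self, data, addr):
        # Packet from the game servers center: forward it to the player.
        self.relay.transport.sendto(data, self.player_addr)

class RelayProtocol(asyncio.DatagramProtocol):
    """Player-facing side of the POP relay."""
    def __init__(self):
        self.transport = None
        self.upstreams = {}   # player address -> upstream transport (or pending buffer)

    def connection_made(self, transport):
        self.transport = transport

    def datagram_received(self, data, player_addr):
        upstream = self.upstreams.get(player_addr)
        if upstream is None:
            # First packet from this player: start opening an upstream socket
            # and buffer packets until it is ready.
            self.upstreams[player_addr] = [data]
            asyncio.create_task(self._connect_upstream(player_addr))
        elif isinstance(upstream, list):
            upstream.append(data)        # still connecting: keep buffering
        else:
            upstream.sendto(data)        # connected: forward to the game servers center

    async def _connect_upstream(self, player_addr):
        loop = asyncio.get_running_loop()
        transport, _ = await loop.create_datagram_endpoint(
            lambda: UpstreamProtocol(self, player_addr),
            remote_addr=CENTRAL_GAME_SERVER)
        for packet in self.upstreams[player_addr]:   # flush anything buffered
            transport.sendto(packet)
        self.upstreams[player_addr] = transport

async def main():
    loop = asyncio.get_running_loop()
    await loop.create_datagram_endpoint(RelayProtocol, local_addr=("0.0.0.0", LISTEN_PORT))
    await asyncio.Event().wait()   # serve until interrupted

if __name__ == "__main__":
    asyncio.run(main())
[/code]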
  7. @hplus0603 Look at starbasecitadel's post; he articulated what I want to do better than I did (though the assumptions I wrote are still valid). The way you rephrased the question I should have asked is probably appropriate as well. More importantly, look at his solution. This is exactly what I meant when I said I wanted to leverage existing ADN servers/POPs to lower the latency under a single player pool concept. So I guess existing ADNs are out, and a custom ADN is in (physical routers, reverse proxy servers, or any other solution; I will probably still need to leverage an existing network like Ubiquity).

     I read about the Live! architecture before, and as you also wrote, it uses player-hosted game servers and creates a "clique" of players as a network. Therefore this solution is not good for me, because I want to create a network where everyone can play with everyone with low enough RTT (whether that is the USA as one network, or breaking it into a couple of networks).

     If you could give your opinion and thoughts about starbasecitadel's solution, that would be very nice. I think it's the best approach (and the same thing I meant when I talked about using a commercial ADN). Thanks again.
  8. @starbasecitadel You got the problem I'm trying to solve, and also the general solution I was thinking about (my last two replies). Thanks for making it clearer (as I said, I do not have much networking knowledge).

     I don't think the speed of light should be an issue. The theoretical RTT limit from San Francisco to NY is about 40ms (calculated from the distance, the speed of light, and the fact that data travels through fiber); a rough calculation is sketched below. If we are talking about a game servers center in a central location and a client-server architecture, the data only needs to travel about half of that distance, and we are talking about covering the entire USA under one player pool (i.e. one game servers center).

     I don't think anyone needs 20ms-50ms RTT in a modern game (especially with all the modern in-game lag compensation mechanisms). People want the lowest possible RTT, but below a certain number they only notice the difference because their RTT is displayed to them. From the reading I've done, I think that if I could achieve <80ms RTT for the players farthest from the servers center, it would be perfect.

     The question now is whether this solution is practical, and whether it can result in a network that covers the entire USA under one player pool while everyone has <80ms RTT. I know that a definite answer has to consider many factors, but what I'm asking for is an estimate based on logic, knowledge, and experience.

     Also, the solution seems logical and not so complicated to implement, even to me, without much networking knowledge (I couldn't articulate it like starbasecitadel, but we meant the same solution). So the question is: why hasn't this solution been implemented by the big game companies?
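     For reference, here is the rough fiber-delay arithmetic behind the ~40ms figure. The great-circle distance and the speed of light in fiber are approximate textbook values, and real fiber routes are longer than the great-circle path, so actual RTT will always be somewhat higher than this bound.
[code]
# Theoretical lower bound on San Francisco <-> New York RTT over fiber.
SF_NY_KM = 4130                 # approximate great-circle distance
LIGHT_IN_FIBER_KM_S = 200_000   # roughly 2/3 of the speed of light in vacuum

one_way_ms = SF_NY_KM / LIGHT_IN_FIBER_KM_S * 1000
rtt_ms = 2 * one_way_ms
print(f"one-way ~= {one_way_ms:.1f} ms, RTT ~= {rtt_ms:.1f} ms")
# Prints roughly: one-way ~= 20.7 ms, RTT ~= 41.3 ms
[/code]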
  9. @hplus0603 We are talking about two different network types/solutions. What you are describing is a classic content delivery network (CDN), which optimizes delivery of static (one-direction) content like video (streaming, VOD, etc.). With that method I would end up with small player pools, as you mentioned, which is not what I want.

     What I'm referring to is an application delivery network (ADN), which optimizes dynamic (non-cacheable) applications and content. Here are papers from three top companies that explain this type of network/solution:
     [url="http://www.dcs.bbk.ac.uk/~geoff/Akamai_WAA_APP_ACC_Brochure.pdf"]http://www.dcs.bbk.a...CC_Brochure.pdf[/url]
     [url="http://www.edgecast.com/docs/ec-adn-datasheet.pdf"]http://www.edgecast....n-datasheet.pdf[/url]
     [url="http://www.ciosummits.com/media/pdf/solution_spotlight/CDNetworks_Making_applications_fly.pdf"]http://www.ciosummits.com/media/pdf/solution_spotlight/CDNetworks_Making_applications_fly.pdf[/url]

     As I said earlier, I haven't seen such networks used for gaming purposes. Despite that, since ADNs optimize RTT for other dynamic applications, I thought it might be possible (perhaps with some adjustments). I know these networks reduce RTT largely through TCP optimization, and that most of the games I'm talking about use UDP. My idea is to leverage the other part of these networks: the POPs, the server distribution, the communication between them, etc.

     Given that this is what I meant when I talked about using companies like Akamai/CDNetworks/EdgeCast, please re-read my last message if you can and give your opinion. Also, any other thoughts you have on the matter would be nice. Thanks for taking the time to respond :-)
  10. @hplus0603 Basically I had a similar idea in mind. My idea was to create a big dedicated servers center in a central geographical location within the pre-defined area (as explained), and then utilize an existing application delivery network (or another kind of private network) from companies like Akamai or CDNetworks. From my reading, I couldn't find any use of such existing networks for online multiplayer games, but since ADNs optimize RTT for other bi-directional applications, I thought it might be possible. Such companies' networks do what you were talking about, right (lease backbone fiber capacity, buy ISP bandwidth, etc.)?

     If so, the question now is whether it would be possible to build a network large enough to fulfill all the assumptions/requirements I talked about in the first post. I know it depends on a lot of factors, but from your knowledge and experience, do you think it should be possible to create such a network (large area, single player pool, <100ms RTT, etc.)?
  11. @hplus0603 I'm looking for ways to actually REDUCE the latency in the network, not for mechanisms that compensate for existing high latency. Therefore I'm not talking about in-game mechanisms (or matchmaking); I'm talking about actual network-level techniques and components. This is because I want to use this network for existing games, not new ones (for example, Counter-Strike). As I said, I'm also not talking about MMOs.

     So basically I'm talking about network infrastructure that minimizes latency across large geographical areas (as explained in assumption #3), under a single player pool concept (as explained in assumptions #4, #5, #6). With this infrastructure, you could take any existing game, lower the latency between the clients and the servers center, and allow online multiplayer gaming in large geographical areas under a single player pool concept. Obviously some customization would be needed for each specific game, but hopefully minimal, because what I'm talking about here is infrastructure for low-latency communication between clients and the game servers center, not any in-game mechanism.

     The simplest example I can think of to illustrate what I mean is this:
     a. Create a big dedicated game servers center in a central location within the USA (geographically speaking). The servers in this center run Counter-Strike games, for example.
     b. What I'm asking about is creating a network infrastructure that allows players from all over the USA to connect to this servers center with an RTT lower than 100ms. This way, everyone can play with everyone in a fair and enjoyable way.

     I hope it makes sense...
  12. @KnolaCross An assumption I probably forgot to add (which you addressed in your first sentence) is that we are also talking about large scale traffic-wise; let's assume a few thousand to a few tens of thousands of players at any given time.

     Also, I don't think I explained myself properly: the games I'm talking about are major FPS/RTS/MOBA games (by Valve, Activision Blizzard, etc.), not MMOs, and not new games that I create (i.e. I'm talking about existing games that use a client-server architecture). So I'm not asking about in-game mechanisms for creating the largest possible network; I'm asking about creating the largest possible network (geographically) for existing games under a client-server architecture and a "single player pool" concept. I took Counter-Strike as an example of such a game because Valve allows you to create a private dedicated server.

     I hope that clarifies what I meant.
  13. I would like to start a theoretical debate about the possibility of creating a large geographical-area network for online multiplayer gaming. I'm talking about a client-server architecture under a "single player pool" concept, meaning everybody on the network can play with everyone else with "low enough" RTT.

     (*When I say theoretical, I mean that the practical implementation of such a network is not the issue of the discussion (coding, etc.). The issue is whether it is possible and what the best approach would be.)

     Assumptions:
     1. Budget is not an issue. Despite that, the solution needs to be realistic.
     2. We will take Counter-Strike as an example (a "AAA" FPS game that allows private dedicated game servers).
     3. Large geographical area optimally means the entire USA. If that is too large an area, USA East and USA West, for example, will be good enough. (The USA region is an example; we can talk about Europe in the same way.)
     4. "Single player pool" means that each pre-defined geographical area has only ONE game server (or, more accurately, one game servers center) in a central geographical location within that area (either the entire USA as one area, or two areas with ONE game servers center for each: 1. USA East, 2. USA West). Players within the pre-defined area(s) connect only to the game servers center of their specific area.
     5. Players outside a pre-defined geographical area cannot connect to that area's game servers center.
     6. All of the players within the pre-defined area should have "low enough" RTT. "Low enough" RTT probably means under 100ms overall (<80ms would be perfect).

     Is it possible to create something like that? What would be the best approach? Please share any thoughts you have on the subject...
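     To pin down what assumptions 4-6 mean in practice, here is a small sketch of the admission check a matchmaking front end for one pre-defined area might perform. The region set, the RTT budget, and the Player fields are illustrative placeholders only; the point is just that membership in the single player pool is decided by region plus measured RTT, not by per-match ping filtering.
[code]
# Gate in front of one area's single player pool: a player is admitted only if
# they are inside the area's pre-defined region AND their measured RTT to that
# area's game servers center is within budget.
from dataclasses import dataclass

AREA_REGIONS = {"USA"}          # assumptions 3/4: one pre-defined area (e.g. the USA)
MAX_RTT_MS = 100                # assumption 6: "low enough" RTT budget

@dataclass
class Player:
    name: str
    region: str                 # e.g. from a GeoIP lookup (hypothetical)
    measured_rtt_ms: float      # RTT measured against the area's game servers center

def admit(player: Player) -> bool:
    """Assumptions 4-6: same pre-defined area, and RTT within budget."""
    if player.region not in AREA_REGIONS:
        return False            # assumption 5: outside the area -> not this pool
    return player.measured_rtt_ms < MAX_RTT_MS

if __name__ == "__main__":
    candidates = [Player("a", "USA", 72.0), Player("b", "USA", 131.0), Player("c", "EU", 64.0)]
    for p in candidates:
        status = "admitted" if admit(p) else "rejected"
        print(f"{p.name}: region={p.region}, rtt={p.measured_rtt_ms:.0f} ms -> {status}")
[/code]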