P2P games - is this the way forward?

Started by
15 comments, last by Themonkster 20 years ago
Hi guys, I have followed the discussion of people trying to create games that work on P2P networks and can support a large number of players. I seem to recall the main issue is trusting the client. I think I might have the answer, although it does involve a central server; this server does not do anything in the game itself. The idea is to use web services to validate the client via encrypted messages. Using web services would be a very cheap way of running a server, as most web hosts can run them. I'm not sure how the validation could work, but I thought I'd just get the idea out to see if anyone could get some use of it. I'm thinking of writing an article on it for GameDev.
Nope.
quote:Original post by Tim Cowley
Nope.

For real.

---

What's going on, foo?
Mmm, thanks for the feedback.

How about saying something more than one word, and the reasons behind it?

[edited by - themonkster on March 19, 2004 4:48:26 AM]
If I'm reading your post right, then:
The problem isn't validating the client; that's relatively simple with any sort of public-private key encryption. The problem lies in trusting what the client is sending you.

It wouldn't be too hard to replace the data in the packets being sent; once you can do that and figure out which messages do what, you can wreak havoc online. In a P2P situation you've got no way of running the simple sanity checks a server might. You can still check through a group decision or something similar, but it will be far more complex and resource-hungry to do so.

I haven't really looked into P2P that much, so feel free to jump in and correct me.
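To make the distinction concrete: signing a message proves who sent it and that it wasn't altered in transit, but it says nothing about whether the content is honest. A minimal sketch in Python - the post mentions public-private key encryption, which needs a crypto library, so this uses a shared-secret HMAC from the standard library instead (same tamper-detection idea, different key model); the key and message fields are made up for illustration:

```python
import hmac
import hashlib
import json

# Hypothetical shared key, e.g. exchanged at login.
SECRET = b"shared-session-key"

def sign_message(payload):
    """Attach an HMAC tag so the receiver can detect tampering."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"body": payload, "tag": tag}

def verify_message(msg):
    """Recompute the tag and compare in constant time."""
    body = json.dumps(msg["body"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

msg = sign_message({"action": "fire", "tick": 120})
assert verify_message(msg)        # untouched message verifies
msg["body"]["tick"] = 10          # tampering with the payload...
assert not verify_message(msg)    # ...breaks the tag
```

Note that a hacked client still holds a valid key and can sign whatever it likes, which is exactly the trust problem described above.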
One of the big problems, from my own perception, is validating what actually happens. There has to be a single process that decides who hit whom first, which weapons missed, etc. For this reason, it seems best that there should be a single machine that detects all game events and sends them to all clients, including what takes damage from whom.

You may be able to trust what comes from a client, but from a timing standpoint (lag, lost packets, ping time), it is too easy for two machines to see a single close-call situation work out differently on each.

Battlezone 1 was a P2P multiplayer deathmatch game. When a guided missile is fired, the client controlling the craft it is locked onto is given charge of deciding whether or not the missile hits. The problem is, the missile is the type that can be redirected to a different target if a heat signature passes in front of it, and the current client might not detect that when another one does. In this way, a single missile could take out two targets, baffling all three players (the player who fired, the intended target, and the player whose craft the missile reacquired and struck). For reasons like this, it is best for a single machine, the server, to make these kinds of decisions.

As for validating what is sent, I think most network games try only to send controls to the server, and only send controls that have on/off states and weapons that fire with a recharge time, so that clients can't spam the server with fire and move messages and get an advantage over the others. The clients can approximate their own game states for a time, but the server is the one that is in authority and has to keep the clients in sync. This is what Battlezone 2 does, but it still suffers from game states that suddenly change as the server corrects the clients.
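The "weapons that fire with a recharge time" check is easy to sketch. A minimal authoritative-server sanity check, assuming a hypothetical cooldown value and message shape (not any actual Battlezone code):

```python
# Hypothetical recharge time in seconds; a real game would use its weapon data.
WEAPON_COOLDOWN = 0.5

class AuthoritativeServer:
    def __init__(self):
        self.last_fire = {}  # player id -> game time of last accepted shot

    def accept_fire(self, player, now):
        """Drop fire messages that arrive faster than the weapon can recharge."""
        last = self.last_fire.get(player)
        if last is not None and now - last < WEAPON_COOLDOWN:
            return False  # spammed message: reject it
        self.last_fire[player] = now
        return True

server = AuthoritativeServer()
assert server.accept_fire("p1", 0.0)        # first shot accepted
assert not server.accept_fire("p1", 0.1)    # too soon, rejected
assert server.accept_fire("p1", 0.6)        # cooldown elapsed, accepted
```

Because the server enforces the cooldown itself, a client that spams fire messages gains nothing: the extra messages are simply dropped.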

The ability to have a single judge for situations becomes VERY important in sponsored tournaments. If your network paradigm can't handle the inconsistencies, your multiplayer will have a very short and sporadic lifetime.


[edited by - Waverider on March 19, 2004 2:06:08 PM]

[edited by - Waverider on March 19, 2004 2:07:49 PM]
It's not what you're taught, it's what you learn.
Ah, cool. What if (and it's a big if) the messages were verified via the web service before being sent to the other client?

quote:Original post by Themonkster
Ah, cool. What if (and it's a big if) the messages were verified via the web service before being sent to the other client?



Nope. As long as something is running on the client machine, it can be hacked and modified. What's to stop me from coding a fake web service that "validates" my packets and sends you correct-looking ones? All that matters in the end is what the other client receives. As long as those packets "look" right, I can send you whatever I want to send you. That's why you need a centralized server in an MMPOG setting that validates the packets AND the actions of the players.

So in a P2P setting you would need to have every client validate every other person's actions. That's a lot of overhead, and solving that problem will be a significant challenge for P2P MMPOGs.
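A rough sketch of what "every client validates every other person's actions" could look like: each peer reports the outcome it observed for an event, and the event is only accepted if a strict majority agrees. All names and outcomes here are illustrative, and this ignores the real cost of collecting the reports over the network:

```python
from collections import Counter

def validate_by_vote(reports):
    """reports maps peer id -> the outcome that peer observed.
    Accept an outcome only if a strict majority of peers agree on it."""
    counts = Counter(reports.values())
    outcome, votes = counts.most_common(1)[0]
    if votes > len(reports) / 2:
        return outcome
    return None  # no consensus: the event stays unresolved

# Three peers report what they saw for the same shot.
reports = {"alice": "hit", "bob": "hit", "carol": "miss"}
assert validate_by_vote(reports) == "hit"

# A split vote yields no decision.
assert validate_by_vote({"alice": "hit", "bob": "miss"}) is None
```

Even in this toy form you can see the overhead: every event needs a report from every peer before it can be resolved, which is exactly the cost being pointed out above.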
Ah, yeah, but we could encrypt it, and then someone would have to crack the encryption, though I should imagine that's fairly easy with today's computers.

Maybe web services could be used in other ways, like delivering XML levels or central scoreboards, maybe even a trading game of sorts.

Maybe a P2P network could be set up as a meeting place for gamers who want to play certain games of a P2P nature.

Then developers of these sorts of games could publish them there.



quote:Original post by Themonkster
Maybe web services could be used in other ways, like delivering XML levels or central scoreboards, maybe even a trading game of sorts.
Fine for any "intermittent play" game; no good for rapid response action types.

XML may be flexible and generic, but all that genericity and flexibility comes at a significant performance cost. There's a reason that relational databases typically use fixed-length fields.
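To put a rough number on that cost, here is a quick comparison (purely illustrative, with made-up field names) of the same position update encoded as XML text versus a fixed-length binary record:

```python
import struct
import xml.etree.ElementTree as ET

# The same hypothetical position update, as XML and as a packed record.
xml_msg = '<update player="42" x="10.5" y="3.25" z="0.0"/>'
bin_msg = struct.pack("<Ifff", 42, 10.5, 3.25, 0.0)  # 4-byte id + three floats

# The XML must be parsed before use; the binary record has a known fixed layout.
parsed = ET.fromstring(xml_msg)
assert parsed.get("player") == "42"
assert len(bin_msg) == 16               # always exactly 16 bytes
assert len(xml_msg) > 2 * len(bin_msg)  # the XML form is several times larger
```

For an intermittent scoreboard fetch the difference is irrelevant; at dozens of position updates per second per player, it adds up fast.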

This topic is closed to new replies.
