About gbLinux

  1. Quote: gbLinux, I now realize you are a troll. However, you've done a great job of skirting the limits of what might be considered a reasonable line of questioning, so I've let the thread go on this long. As far as trolls go, you're really skilled. (Or, as far as normal social humans go, you're very unskilled -- it's hard to tell the difference online.) Because you do not take advice that's given to you, and do not actually draw the learning from the posts that have been made (including posts with clear numbers, statistics, and technical explanations), this discussion will go nowhere further.

  hplus0603, troll? what kind of insult is that? what are you talking about, and why in the world would you close a public discussion without caring whether other people wanted to talk about it? what is this? p2p beats the server-based model and i have proven it... and even if not, this is the "Multiplayer and Network Programming" forum, so this discussion very much belongs here, sheesh!! let other people talk about it, you grinch.

  packet overhead = 30 bytes
  packet, client input = 10 bytes
  packet, full entity state = 20 bytes
  ----------------------------------
  C/S, client upload per frame: 40 bytes
  C/S, client download per frame: 50 * N_clients bytes
  P2P, peer upload per frame: 50 * N_peers bytes
  P2P, peer download per frame: 50 * N_peers bytes

  *** 16 players, per one client, per second, 60Hz
  C/S: upload = 40*60 = 2400 bytes/s
  C/S: download = 50*16*60 = 48000 bytes/s
  P2P: upload = 50*16*60 = 48000 bytes/s
  P2P: download = 50*16*60 = 48000 bytes/s

  *** 32 players, per one client, per second, 60Hz
  C/S: upload = 40*60 = 2400 bytes/s
  C/S: download = 50*32*60 = 96000 bytes/s
  P2P: upload = 50*32*60 = 96000 bytes/s
  P2P: download = 50*32*60 = 96000 bytes/s

  of course, with the huge difference that P2P could actually run at a real, uninterrupted 60Hz, while the server loses more time as the number of clients increases and can hardly achieve a 60Hz communication update rate with any of the clients, maybe not even over LAN.. but how would i know?

  1.) does p2p have a shorter traversal path than the server-based model?
  2.) would parallel computing further reduce latency by getting rid of the serial computation a server does?
  3.) can p2p run at a much higher frequency (60Hz and more) due to the nature of uninterrupted, streamed, asynchronous updates?
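The arithmetic above can be sanity-checked with a short script. The 30/10/20-byte packet sizes, the 60Hz rate, and the convention of counting all N peers (rather than N-1) are this post's own assumptions, not measured values:

```python
# Per-client bandwidth check, using this post's assumed packet sizes.
OVERHEAD = 30   # bytes of per-packet overhead (assumed in the post)
INPUT = 10      # bytes of client-input payload (assumed)
STATE = 20      # bytes of full-entity-state payload (assumed)
HZ = 60         # update rate

def client_server(n_clients):
    """Per-client (upload, download) in bytes/s for the server model."""
    up = (OVERHEAD + INPUT) * HZ                 # one input packet per frame
    down = (OVERHEAD + STATE) * n_clients * HZ   # one state packet per client per frame
    return up, down

def p2p(n_peers):
    """Per-peer (upload, download) in bytes/s for the p2p model (symmetric)."""
    each_way = (OVERHEAD + STATE) * n_peers * HZ
    return each_way, each_way

for n in (16, 32):
    print(n, "C/S:", client_server(n), "P2P:", p2p(n))
```

Running it reproduces the figures in the post: 2400 up / 48000 down for 16-player C/S, and a symmetric 48000/48000 for 16-player P2P.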
  2. Quote: Original post by hplus0603 Quote: why do you care how much you saturate WWW in general? Because if World of Warcraft (a single game) were P2P, it alone would consume more than the available bandwidth of the entire Internet. gbLinux, I really suggest you go read through the entire Forum FAQ for this forum, including following all the links. Start with question 0, make sure you internalize the science behind it, then go to question 1, make sure you internalize that, ... Then come back, and we can hold a discussion that makes sense, and where you don't come out looking like a lazy n00b. You've made so many beginner mistakes in your analysis it's not even funny, yet you complain that the experienced answers don't make sense to you. For an example of the latest mistake: You assume that geographic distance equates to network distance. That's not true at all -- when I ping a server in San Francisco from Redwood City (a distance of about 25 miles north), the packet goes through San Mateo, Sacramento (80 miles away), San Jose (30 miles south) and from there finally to San Francisco. If you're not familiar with the SF Bay Area, look it up on a map. Geographic distance has very little to do with network distance at the regional and lower levels. Hence, why we talk about "back-haul" and "long-haul" in the discussion.

huh. why complicate? YES/NO:
1.) does p2p have a shorter traversal path than the server-based model?
2.) would parallel computing further reduce latency by getting rid of the serial computation a server does?
3.) can p2p run at a much higher frequency (60Hz and more) due to the nature of uninterrupted, streamed, asynchronous updates?

i rest my case... and i will gladly answer any question and try to explain if there is still anyone who cannot understand this.

Quote: Because if World of Warcraft (a single game) were P2P, it alone would consume more than the available bandwidth of the entire Internet.

what are you talking about? we are not talking about 'broadcast packets' any more; i think the conclusion was that those packets would be lost on the WWW. are you asserting that all 10 million WoW clients play on one server? how many players can one WoW server host at most? "bandwidth of the entire Internet" -- does that even make sense? that has nothing to do with anything. you should only be concerned about upload/download bandwidth per client. -- take 32 clients, take some average packet size, calculate the traversals and latency, then tell us what the upload/download bandwidth is for the p2p and server-based models, can you do that? as long as every peer/client stays within its limits, that's all that matters, and then p2p wins over the server model on sheer SPEED provided by the constant, uninterrupted streaming flow of asynchronous updates, isn't that so? [Edited by - gbLinux on August 29, 2009 6:37:51 PM]
  3. Quote: Original post by Antheus Quote: Original post by gbLinux shortest route will yield fastest path? (YES/NO) The shortest route between London and New York is a straight line. It would take decades or centuries to dig a tunnel through there. The next shortest route is over the surface. It takes about 3 days of sailing. The longest route is through the air - and it only takes 6 hours or so. Also - stuck in gridlock vs. subway+walking. So I'm going to go with: No.

i thought you were referring to yourself as a 'networking expert', and now, instead of using the paths of the network infrastructure, you would rather dig tunnels?! please, if this is your profession... don't you think it's kind of important to figure this thing out completely? or, at least, don't try to dig it down without good reason, thanks. it should be perfectly clear to novices and experts alike: p2p has a shorter traversal route, so it simply has to be able to communicate faster, plus it has all the benefits of streamed, non-interrupted, parallel processing. this will not only allow a far better frequency, but asynchronous updates will smooth many visual glitches automatically, and the streaming nature of the incoming data would make the whole experience even more fluid. if you think i'm wrong about something, then please point it out. [Edited by - gbLinux on August 29, 2009 5:05:34 PM]
  4. Quote: Original post by Matias Goldberg Quote: Original post by gbLinux

B   C
  s
A   D

AB = BC = CD = DA = 30km
As = Bs = Cs = Ds = 20km

*** P2P, one frame loop, per ONE client
round-trip: ~33km (2x 30km, 1x 40km)
packets sent: 3
packets received: 3
total packets per client: 6

*** SERVER, one frame loop, per ONE client
round-trip: 40km
packets sent: 1
packets received: 4
total packets per client: 5

Round-trip for P2P should be 40km (2x 30km, 1x 60km / 3), assuming an average is what you did.

what?? no, take client A for example:
A->B = 30km
A->C = 40km
A->D = 30km
how do you keep coming up with 60km? diagonally opposite clients can talk to each other as well; this is not some RING topology or something.

Quote: Second, and more importantly, in computers there is no average in this stuff. The whole system works as slow as the slowest component in that system (what is called a bottleneck), not as its average.

no. there actually is an average here, especially if we decide to sync all the peers to some time in the past, just like servers do. this system does not work only as fast as the slowest component allows, because updates are asynchronous. there is no FAST/SLOW here, no waiting -- you only have FURTHER and CLOSER; further is not SLOWER, it is only more behind in the past, but the rate of update is NON-INTERRUPTED, a CONSTANT streaming flow, theoretically working at a FULL 60Hz and more, where the frequency only depends on upload bandwidth, packet size and number of peers. you are describing problems the server-based approach has.

Quote: This means the P2P will go as slow as if it were running at 60km, because A has to wait for D. There are some clever tricks (lag compensation) that help A predict what D should have done and, when the data has arrived, fix the estimations. Nevertheless, in the long term, A will eventually need to stop and wait because D can't keep up (or vice versa). And B and C are caught in A & D's delays, so they have to wait too, to avoid getting too far ahead of A & D in the simulation.

no, no waiting here. imagine 8 people have radar devices that can read the signal from similar devices and display their location. all the devices broadcast their location to all the other devices, and all the devices update the location of every other device as the signal arrives. now, this signal never stops, and the latency here is directly proportional ONLY to distance.
  5. Quote: I'm going to leave this discussion with one observation. It is not uncommon for a novice to a particular field to believe he's come up with a novel idea that nobody's ever thought of before. He can't see any problems with his idea, and he gets frustrated because so-called "experts" will dismiss it, almost out of hand. This is not because the experts lack imagination; rather, it is because the experts can see the inherent flaws in the idea that a novice - from a lack of experience - will miss.

i'm a novice and i cannot see any problems with p2p, so i came here to ask an expert (you) to explain it to me, but you only told me p2p is bad, it has problems, it's this and that, it can't work... but no explanation, no analysis, no numbers.. nothing, and now you're gonna leave? i suppose you realized you are wrong; after all, what kind of expert does it take to realize the shortest route will yield the fastest path?

Quote: Some people believe that being a novice can be an advantage because you're not hampered with pre-conceived notions of what is and is not possible, but that is not true. Perhaps you can provide one or two examples of a novice who actually has come up with a novel idea that no "expert" would have considered, but for each of those, I can point out tens of thousands of "novice" ideas that fall down in the real world. Do not be discouraged, however. We were all novices once! (Not that I'm an expert by any stretch of the imagination, of course!) My suggestion would be to keep your idea in the back of your mind as you learn all you can about implementing networked applications in the real world - you will be surprised at how complex it actually is.

shortest route will yield fastest path? (YES/NO) hahaa... bye, bye.
  6. Quote: I think it's already been established that a "peer to peer" model is only useful if all the clients are geographically close (otherwise latency is unavoidable, and the latency of Sydney-LA, say, far outweighs any latency a server may introduce).

i'm pretty sure this has not been established; in fact, most of the stuff said here is pretty vague or was refuted by subsequent posts. we didn't even know whether p2p multiplayer games were very common or very rare, and even now i'm not quite sure about it... hahaa. anyhow, set up any example you want, pick any cities you want, and we will calculate the traversal for both models; i assure you latency will be LESS with p2p as long as bandwidth does not come into play. btw, does anyone know about this Xbox 360 p2p network?
  7. i just realised i was mixing two concepts. it is important to differentiate line Speed and Bandwidth. even though it is referred to as a "speed", bandwidth is actually not what matters to "PING"; it does not measure the speed with which packets travel or influence how long it would take them to arrive at the destination... for that we need some distance and the number of routers and checkpoints they must stop and visit on the way. say, if the speed of water molecules in some water pipe is what packet speed is in network lines, then bandwidth is the width of that pipe, ie. how many packets can flow through it per second. but the speed, the speed is always about light speed, i suppose, minus all the time lost in routing... and that's pretty much all i know about this. what routers do and how much they slow packets down, that i don't know.

anyhow, this is what is important: if SPEED is the same for upload and download (forget bandwidth) between peers in p2p as it is between server and clients, then p2p simply must be that much faster (as explained with ABC). bandwidth only starts to matter more with, say, 16 or 32 players, and even then it heavily depends on just how much you can squeeze your communication info, so 32 players on p2p sounds quite doable with average ADSL connections.

the conclusion: a p2p game with 8 or fewer players must, simply must, be faster than a server-based game, given the same hardware and some average ADSL connections. does anyone want to challenge this conclusion?

btw, the only thing i could google about "p2p multiplayer" is that some folks are complaining about some "laggy Xbox 360 P2P multiplayer", whatever that is all about, plus some mobile games are using it via bluetooth.. that's pretty much all. how peculiar. and, what now... hopefully someone will refute my conclusion so i don't have to be the one to produce the first online p2p multiplayer game in the world. [Edited by - gbLinux on August 29, 2009 2:35:08 AM]
  8. Quote: Original post by Matias Goldberg Hi! Quite an argument here. I just wanted to point out the ABC problem. I'll change it to: *** Source Snippet Removed *** AD = DC = 30km. SD = 17km. Server approach: info coming from B to D needs 34km. P2P approach: info coming from B to D needs 60km.

that seems wrong. those numbers were not coincidental; that's what you get with an equilateral triangle on a flat surface and its centre. you can't just imagine numbers... or perhaps you can, but that makes for a distorted geo-surface. here is how it goes for an equilateral quadrilateral, such as a square or rhombus:

B   C
  s
A   D

AB = BC = CD = DA = 30km
As = Bs = Cs = Ds = 20km

*** P2P, one frame loop, per ONE client
round-trip: ~33km (2x 30km, 1x 40km)
packets sent: 3
packets received: 3
total packets per client: 6

*** SERVER, one frame loop, per ONE client
round-trip: 40km
packets sent: 1
packets received: 4
total packets per client: 5

it can't get better than this for the server-based model, having the server in the middle... it's only the client's upload speed limit that can make the server-based model faster, as far as i can see.
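As a sanity check on the square example, the exact distances can be computed (a sketch; the post rounds its figures - the true centre distance for a 30km square is ~21.2km, not 20km, and the diagonal is ~42.4km, not 40km):

```python
import math

side = 30.0                  # square edge, km (from the post)
diag = side * math.sqrt(2)   # opposite corners, ~42.4 km (the post rounds to 40)
centre = diag / 2            # client-to-server distance, ~21.2 km (the post rounds to 20)

# P2P: client A talks to B and D (adjacent, one side each) and C (the diagonal)
p2p_one_way = [side, side, diag]
p2p_avg = sum(p2p_one_way) / len(p2p_one_way)   # average one-way hop, ~34.1 km

# Server: each exchange travels client -> centre -> client
server_round_trip = 2 * centre                   # ~42.4 km

print(round(p2p_avg, 1), round(server_round_trip, 1))
```

So with exact geometry the average P2P one-way hop comes out near 34km against a ~42km server round trip, in the same ballpark as the post's rounded ~33km vs 40km.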
  9. Quote: Quote: Original post by gbLinux *** 64 clients: p2p upload: 63 p2p download: 63 total: 126 *Each* client. You listed one. There are 64, and each needs to send 63 packets. So each update generates 4032 packets in the internet.

i said per ONE client, per ONE frame... why do you care how much you saturate the WWW in general? that does not influence latency, individual bandwidth or the speed of a p2p network in any way i can perceive. what does it matter? isn't that like everyone starting to talk faster over the phone? would that "overload" the lines, slow down the internet or something?

Quote: that per client, per whole network, upload, download? Per entire shared state. Per game. That is the total across all servers and all clients. In P2P, there are a total of 64 clients, each of which generates 63 packets it sends to all others. In C/S, each client sends one packet to the server, and the server sends one packet to each client.

i'm afraid the information the server sends to each client about every other client is much bigger than what clients send to the server and what peers send to peers in p2p. it's more realistic to say that in C/S each client sends one "packet" and receives as many "packets" as there are other clients, or in the worst case as many as there are dynamic entities of any kind in the client's scene. we should only be concerned about the upload/download stream per ONE client, not about some "internet saturation", unless someone can explain how that matters.

Quote: If all clients are sent full updates, then P2P and C/S need about the same bandwidth. But P2P has so much more overhead. A user participating in a 64-player P2P game over 1Mbit upstream would waste their entire bandwidth on packet overhead, and couldn't even send any usable data.

well, i can't tell if you're using the right numbers and formulas again. are you saying they would use the allotted amount of download data permitted by their ISP, or that their UPLOAD SPEED would not be sufficient? so, i suppose server-based games host 64, maybe more players? what is the maximum number of players allowed per server on some of the most popular games?
  10. Quote: When there are 3, P2P needs to generate 9 packets, C/S only 6. When there are 64 clients, P2P needs to generate 4096 packets, while C/S only 128.

is that per client, per whole network, upload, download? that doesn't seem right. this is how i see it -- per one frame, per one client:

*** 3 clients:
p2p-model upload: 2, download: 2, total: 4
server-model upload: 1, download: 3, total: 4

*** 6 clients:
p2p-model upload: 5, download: 5, total: 10
server-model upload: 1, download: 6, total: 7

*** 64 clients:
p2p-model upload: 63, download: 63, total: 126
server-model upload: 1, download: 64, total: 65

do we understand each other? are we talking about the same stuff here?
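The per-client counts above follow directly from the topology; a tiny script reproduces them (following this post's convention that a server-model client downloads one state packet per connected client, itself included):

```python
def per_client_packets(n_clients):
    """Packets per frame, per client, for both models (the post's counting)."""
    p2p_up = n_clients - 1     # one packet to every other peer
    p2p_down = n_clients - 1   # one packet from every other peer
    cs_up = 1                  # one input packet to the server
    cs_down = n_clients        # one state packet per client, self included
    return (p2p_up + p2p_down, cs_up + cs_down)

for n in (3, 6, 64):
    p2p_total, cs_total = per_client_packets(n)
    print(n, "p2p:", p2p_total, "c/s:", cs_total)
```

This prints 4/4 for 3 clients, 10/7 for 6, and 126/65 for 64, matching the listing; the crossover where C/S needs fewer packets per client appears as soon as there are more than 3 clients.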
  11. Quote: Original post by Antheus Here are some numbers: 7, 22, 45, 88, 102, 344. Actually, the real ones are below:

Quote: this situation can change rapidly as number of clients increases
The total number of packets in the network increases with n squared. So with 2 clients, there are 4 packets. With 16 clients, there are 256 packets. With 100 clients, there are 10,000 packets. Sent 60 times per second. To be completely fair, the number is n*(n-1), but for the purpose of this discussion, this doesn't matter beyond 3 clients.

Quote: "upload", does that mean top speed would be limited by client ISP's upload speed limit?
Yes. Or better yet, by each user's subscription plan's limit (128kbit - several Mbit).

Quote: this is where server model might catch up, if not overrun p2p, i think... since the difference in upload/download speed is so great for most of the ADSL plans, as far as i know. any thoughts on this?
Due to the wonders of mathematics, we can determine when this would happen. The formulas are given above. For each update (60 times per second):

Client/server:
  Server: N packets
  Each of the N clients: 1 packet
  Total number of packets: N + N

P2P:
  Each of the N peers: N packets
  Total number of packets: N * N

As far as the number of packets goes:
  N * N < N + N
  N^2 < 2N
  N < 2

In other words, P2P is only more efficient as long as there are at most two peers. When there are 3, P2P needs to generate 9 packets, C/S only 6. When there are 64 clients, P2P needs to generate 4096 packets, while C/S only 128.

Bandwidth requirements, 64 players:
P2P: Each player sends 10 bytes (+28 bytes packet overhead), 60 times a second, to 64 peers. This adds up to 38*60*64 bytes per second, or 145920 bytes per second. So each player needs ~1.2Mbit upstream.
S/C: Each player sends 10 bytes (+28 bytes packet overhead), 60 times a second, to the server. This adds up to 38*60*1 = 2280 bytes per second, or approximately 20kbit upstream (a dial-up modem provides it). The server, however, needs to send full updates, at maximum the same rate as P2P (often less), so ~1.2Mbit. Servers are typically hosted by operators in data centers, not on bedroom computers.

But it gets worse. For each update, all of those packets are sent over the internet. The total bandwidth required by each model is:
P2P: Each client sends to all others, so 1.2Mbit * 64 = 76.8Mbit
S/C: 1.2Mbit + (64 * 0.02Mbit) = 2.48Mbit

But now let's look at a 256-player game:
P2P: 38*60*256*256 = 1.14 Gigabits
S/C: 38*60*256*2 = 8.9 Mbit

Let's go further. On a WoW server, there are about 3000 players at any given time (except Tuesdays).
P2P: 38*60*3000*3000 = 152 Gigabits (152,000 Mbits)
S/C: 38*60*3000*2 = 104 Mbits (or the capacity of a home broadband router)

And this is just for a single shard - WoW has how many, a hundred? See the problem with P2P, and why just a simple napkin calculation shows it simply doesn't work? Just because The Cloud is somewhere out there, it doesn't mean it's magic infinite space. The numbers listed above are the infrastructure requirements, which exceed the total internet capacity of most countries.

Edit: Software prefers bytes, but network gear prefers bits. So when talking about capacities, I multiply the bytes sent by 8 to obtain bits. Not entirely correct, but close enough - it's not what makes or breaks it.

beautiful! thank you, i very much appreciate that. i'm stupid for that kind of thing and i had to see some numbers to be able to grasp it and have some picture of the relations. now, let me chew on that for a while...
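Antheus's napkin math can be reproduced in a few lines. The 38 bytes (10-byte payload plus 28-byte packet overhead) and 60 updates/s are the quoted post's figures; like the quote, this sketch uses the n*n simplification rather than n*(n-1), and its exact outputs differ slightly from the quoted totals because the post rounds intermediate values:

```python
PKT = 10 + 28   # payload + per-packet overhead, bytes (from the quoted post)
HZ = 60         # updates per second

def per_player_upstream_mbit(n):
    """Upstream one P2P player needs, sending one packet to n peers per tick."""
    return PKT * HZ * n * 8 / 1_000_000

def total_mbit(n):
    """Total network bandwidth in Mbit/s: (P2P, client/server)."""
    p2p = PKT * HZ * n * n * 8 / 1_000_000   # every player to every player
    cs = PKT * HZ * n * 2 * 8 / 1_000_000    # n packets up + n packets down
    return p2p, cs

for n in (64, 256, 3000):
    print(n, per_player_upstream_mbit(n), total_mbit(n))
```

The quadratic-vs-linear gap is the whole story: per-player upstream alone is ~1.2Mbit at 64 players, and the P2P network total grows with n squared while the client/server total grows linearly.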
  12. Quote: Original post by hplus0603 Actually, almost no PC game is peer-to-peer. While some games allow users to host without a central server, that merely means that the user doing the hosting is the topological server. Starcraft, for example, is explicitly *not* a peer-to-peer game. Quote: absolute optimisation and lowest possible hardware requirements I understand engineering. What I'm trying to say is that those two requirements are actually contradictory in certain terms. If you want the lowest possible latency at all costs, then that's not optimization, because the overall total cost goes up -- you require everyone to have a great Internet connection. Engineering is the art of making smart trade-offs to deliver the most end-user value at the lowest possible cost. Low latency is "value," high bandwidth requirement is "cost." Bandwidth might also be considered a "hardware requirement," so reducing bandwidth means reduced hardware requirements. If you're really into P2P networking for games, I suggest you check out the VAST project (a link to which is in the Forum FAQ).

ok, i think it's safe now to conclude that p2p multiplayer is non-existent, or at least very rare. yes, actually it is contradictory, i agree. so, to put it more precisely - from the algorithm and software implementation point of view, as a programmer, i'm after speed - fast code; but in terms of network architecture i'm interested in the _theoretically and _potentially faster solution. in theoretical terms, i expect the future to get rid of the limits the current network and other information infrastructures pose... so, simply said, without the client upload speed limit i see no way the server model would outperform p2p, and even now, as it is today, p2p might very well be a suitable candidate to replace the server-based model for some specific, if not most, situations. i kind of have a feeling it's 'security' that made and keeps p2p so wildly unpopular. thanks for that link, that's exactly what i was looking for... if anyone can find more such projects feel free to let me know.
  13. Quote: Original post by hplus0603 Quote: clients should be able to communicate with each other as much and as fast as they can do it, on average, with the server That's simply not true, for two reasons: 1) Simple math. The overall number of connections, and thus packet overhead, grows by N-squared in a P2P system, but is static at 1 for c/s clients, and N for c/s servers (although each client sees N connections up and N down in P2P, and 1 up, 1 down in c/s). For most games, the size of the packet headers can easily be as big as or bigger than the size of the actual update data in a single "tick" update packet. I assume you're familiar with N-squared vs N vs constant requirements, and how they scale; if not, please use the appropriate Google or Bing services. 2) All consumer internet connections are asymmetric, where upload is 1/10th the bandwidth of the download. In a P2P set-up the bandwidth requirement is symmetric, whereas in c/s it's heavily skewed towards download for all the clients. Hence, all things being equal, c/s systems generally can allow clients to receive 10x more data than a P2P system, assuming the server has "infinite" outgoing bandwidth. I'm starting to not like your tone, and your lack of basic Googling before you start attacking the answers people are trying to give you in this thread, though. Having operated commercial client/server games for years, ones that even include voice chat, I can say that bandwidth costs are very low on our list of concerns. The ability to attract and retain people is about a 1000x bigger concern for the P&L statement (given that you want numbers). The nice thing is that more users, while needing more bandwidth, also mean more income to pay for the bandwidth, and user income grows linearly, whereas bandwidth cost grows basically logarithmically. Leasing, cooling, power and rack space for the servers are also bigger costs than bandwidth, but still smaller concerns by an order of magnitude than customer acquisition, retention and service.

ok, can you now please express the same thing with some real-world numbers and examples, say something like ABC only with 8 clients, and compare... use some average server upload bandwidth and some average client upload/download ISP limits, some average data/packet sizes... compare, and then it will be much easier to understand. can you do something like that, please? i still believe there are lots of things which can turn out in favor of p2p, so ideally there would also be some article on the www where someone who actually tried it experimentally wrote down some numbers... practice can sometimes reveal the unexpected. after all, coincidence, luck and random chance are the main factors in the history of human inventions and discoveries.
  14. Quote: What did you mean by: Quote: Original post by gbLinux visual absurdities that happen to each and every client with the server-based approach In the server-client model the slowest player never affects the gameplay if the server is authoritative, unless the game is designed to make sure everyone is in sync and not lagging. (I've played a few games that do that.)

yes, kind of.. here is the article i was referring to: The Valve Developer Community, Source Multiplayer Networking.

Quote: About using P2P in a game: it's possible. You might want to try it and see how well it works for you. So yeah, I'd write up a test program and see if you like how it works.

so, after all... this p2p multiplayer has never been tried, is that what you're saying? and... i'm supposed to invent this thing now? c'mon, someone must have tried something about it.

Quote: Quote: Original post by gbLinux - the central server collects all the input from all the clients, calculates movement/collision, then sends each client the new machine state, ie. new positions of each entity, velocities, orientation and such, which each client then renders, perhaps utilising some extrapolation to fill in the gaps from, say, a network refresh rate of 20Hz to their screen refresh rate of 60 FPS. now, this would mean that the game actually can only run as fast as the slowest connected client can communicate with the server, right? That doesn't even make sense. The server is never limited by the slowest connection. It's just getting input and sending out packets. At no point does it have a "wait for slowest client". If it doesn't get input, it doesn't get input. The only way you could get that effect would be if you've lock-stepped input, which I've never seen done before (RTS games don't even do that as far as I know).

i was referring to what i explained later on - different client update rates make for an unrealistic simulation and cause space/time paradoxes; therefore, ideally, the game should run at the frequency the server can communicate with the slowest client. that is why there should be different servers for slow/fast connection clients. The Valve Developer Community, Source Multiplayer Networking.

Quote: I use the server-client model. It allows me to get input from clients and update physics, then build individual packets with full/delta states for each client, such that each packet is small and easy for even the lowest-bandwidth players. The added latency you mention isn't noticeable. You might want to make some measurements and see if it's worth it. I'd like to see some test with 8 players where the one-way latency is recorded between each and every client. Then compare it to the RTT of the server-client model.

i'd rather read about someone else's measurements, but yeah, that is exactly what i want to know. till then i can use the simple ABC example and plug in some more numbers...

  B
s
A   C

AB = BC = CA = 30km
As = Bs = Cs = 17km

say it takes one packet of information to communicate either plain client input or a full entity state.

*** P2P, one frame loop
round-trip: 30km
packets sent: 2
packets received: 2
total packets per client: 4

*** SERVER, one frame loop
round-trip: 34km
packets sent: 1
packets received: 3
total packets per client: 4

this situation can change rapidly as the number of clients increases, but also depending on real packet sizes, as well as how the upload/download speed limits posed by the ISP affect the speed of outgoing and incoming packets. in p2p all packets are "upload"; does that mean top speed would be limited by the client ISP's upload speed limit? this is where the server model might catch up, if not overrun p2p, i think... since the difference in upload/download speed is so great for most ADSL plans, as far as i know. any thoughts on this?
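For scale, here is what those distances mean in time, as a sketch using the common rule of thumb that signals in fiber travel at roughly two-thirds of light speed (~200,000 km/s - an assumed figure). This counts pure propagation only and ignores routing and queueing delay, which, as the thread's experts point out, is what actually dominates at these distances:

```python
FIBER_KM_PER_S = 200_000  # assumed propagation speed, roughly 2/3 of c

def one_way_ms(km):
    """Pure propagation delay in milliseconds, ignoring routers entirely."""
    return km / FIBER_KM_PER_S * 1000

print(one_way_ms(30))  # direct peer-to-peer hop in the ABC example
print(one_way_ms(34))  # client -> server -> client in the ABC example
```

At roughly 0.15ms versus 0.17ms, the geometric difference between the two models is far below typical per-hop router latency, which is why geographic distance alone decides very little in this comparison.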
  15. Quote: Original post by stonemetal Quote: Original post by gbLinux nonsense. what in the world are you talking about? where did you pull that information from? what kind of hardware do the 10 million people who play WoW have? ...well, that hardware will do just fine. ah, forget it, just remember that clients would NOT need to do the same work as the server, much less indeed, and it would be SIMULTANEOUS, parallel. How so? If you plan on shipping full state from place to place to avoid clients having to recalculate things, then your bandwidth requirements have just skyrocketed. Oh, and what stops me from sending out the "I win" packets? You need a server to be authoritative. In LAN games it is ok not to, since you obviously know everyone involved.

my enquiry is theoretical; i'm interested in the theoretically FASTEST solution first. security, connection quality and other local or temporary limits are not important to me at this point. all i want is a side-by-side comparison of p2p vs server, given the same client distribution, with 4, 8, 16, 32, 100, 500.. clients. not to be "convinced" and "assured" - i want analysis and numbers. sure, there will be problems as they scale; bandwidth becomes a problem for p2p, but as the future comes to pass, bandwidth may become cheap, allowing p2p to scale much more elegantly. what is the average bandwidth cost with server-based games? what is the max number of clients current server-based games support? if today's servers can't do 100 clients at all, then no one should object that it would be too bandwidth-costly for p2p to do so; in my book that's still a win for p2p. if not for today, then for the happy children of the future. for now, i'm happy if p2p beats the server with an 8 to 16 user setup.