
Guest Anonymous Poster

UDP programming


Aye, you need to be more specific. But I think under real-world circumstances you're always going to lose packets somewhere, be it the lines, the routers, a bad server, whatever. Reducing the loss is possible, though.

I believe that packet size will be dictated by your MTU setting, which by default on Win98 is 576.

There are many factors that need to be taken into account.

If most of your 'logic' code responsibility lies on the server then packets coming from a client to a server would not be as important as packets from the server to a client. In this instance, the server is always 'right'.

Most packets that you send do not need to be reliable. If you want every packet to have guaranteed arrival then why are you using UDP/IP? You might as well switch to TCP/IP instead of trying to re-invent the 'wheel'.

I would like to disagree with the above post. I would go as far as to say that UDP packet loss has been the cause of many deaths I have endured in Sony's EverQuest. I fundamentally disagree with their choice of UDP over TCP. If the server is always right in your example, then the server reacts to things that may or may not be displayed on a client machine. As the number of things not displayed increases, so does the likelihood that the server will react to an event the user doesn't know about.

For example, I am 5th level in EQ. I can kill two Orc Pawns at once, maybe three with luck. Anyway, I am playing the game and I approach the lone orc on the hill. It turns out that as soon as I get close, the server syncs up with me and there are three more orcs on the hill, all of which begin to attack me. Since the server was always right here, I suffered a death I could have avoided.

The more I think about this as a customer of EQ, the more I think: wait a second, the server has to be right to properly model the world and control the environment, but the client has to be right as well so players can make decisions. Saying the server is always right in this case is akin to saying the customer is never right. You and I both know this is wrong, as the customer is always right. It seems to me that in order to deliver the product you advertise as a networked game, you have to ensure that both sides of the game are correct. Forget UDP for networked games; it doesn't make business sense.

I plan on using TCP to do every bit of my game. I will focus my optimization techniques elsewhere to gain performance.

Kressilac

------------------
Derek Licciardi

Packet size, under Berkeley sockets and most other socket APIs, is based on the number of bytes specified in the send operation. The IP stack may then fragment the packet, based on the MTU. Other APIs may not support this one-send-one-packet model (XTI may not).

Creating reliable UDP packets may be reinventing the wheel, but TCP has a lot of overhead you may not need. For example, if reliable delivery of packets is necessary but the order of packets is unimportant, there may be a performance advantage in implementing the checks in the application.

The following relates to Kressilac's statements regarding Everquest. You may want to ignore it because A) it's long, B) it doesn't have the nicest tone.

It's naive to believe that using TCP over UDP would improve synchronization.

First off, let's talk about latency (or lag). It is intuitively obvious that latency has an adverse effect on synchronization. Latency has two components in computer terms: computational latency and network latency. Latency in a network with infinite bandwidth is a function of the speed of light. Given that the Internet doesn't have unlimited bandwidth, latency is a direct function of bandwidth and data transferred. So if you increase the amount of information sent, latency worsens. (Computational latency of intermediary routers is subsumed into bandwidth.)

In UDP, a packet is sent with an 8-byte header plus data. In TCP, a packet is sent with a minimum 40-byte header. (Remember, we're talking about thousands of packets per second, so these bytes add up.) And then we have ACK messages flying around, which have their own IP headers attached, which eats up even more bandwidth. Just by switching from UDP to TCP we've increased the amount of data we're throwing around. In a perfect network, one without packet loss and with a reasonable packet size, TCP has only about a 1% overhead compared with UDP. With even mild (3%) packet loss, computational overhead increases by nearly 400%. Plus, those packets are re-sent, which increases data flow. And under TCP even received packets can be resent, which worsens the issue further.

Let's focus now on the concept of packet loss. Take the example of Derek spotting a lone orc on a hill at time 0. Between times 0 and 0.5, sun spots disrupt communications between Derek and the server. At time 0.6, Derek decides to walk up the hill and attack the orc. At time 0.7, the message gets to the server that there was a black hole between the server and Derek, and the server resends the packets to Derek. At time 0.8, Derek gets up the hill, the packets reach Derek, and the fact that three more orcs spawned at time 0.3 registers on the client. Derek dies. Notice this was a TCP situation. The server resent the packets that were lost. Did this change the fact that the information was delayed in getting to the client? No. Same result. TCP CANNOT guarantee that packets are not lost. It guarantees that a copy of the data sent will eventually get to the receiver in the order it was sent. You can't blame the fact that the message was delayed by 0.5 time units on UDP.

Ok, you say, then the server should make sure that all the clients got the update at time 0.3. Now instead of a real-time game, you've got a turn-based game. And given the quality of internet connections out there, a turn-based game with really long turns.

"Saying the server is always right in this case is akin to saying the customer is never right."

You need to consider that there are multiple clients that need to be synchronized. In any game with player-player interaction, you need an impartial decision maker; in this case, the server. Otherwise Joe Hacker can tell his client that everyone else has died. If the customer is always right in this case, well, everyone else just died. Maybe you don't allow the client to make decisions like that, but the client can still break rules pretty badly otherwise; for example, the UO hack that allowed players to attack anyone they had ever seen, even from many screens away.

"Forget UDP for netwroked games, it doesn't make business sense."

If it doesn't make business sense, then why do businesses use it? UO and EQ both use it. Heck, Asheron's Call uses it. If it was such a bad protocol, then why did they all follow the pack?

In conclusion:
You can't use TCP as a cure-all for your networking woes.

I'm not saying that UDP is better than TCP, but if you rule out UDP as a blanket statement, you've uselessly crippled yourself.

Just some clarifications on an otherwise well written response...

> In UDP packet is sent with an 8 byte header plus data. In TCP a packet
> is sent with a minimum 40 byte header

Not quite. The IP header is 24 bytes, then UDP adds 8 and TCP adds 16. Also, many modems allow "TCP header compression", but I've never heard of "UDP header compression".

> Remember we're talking about thousands of packets per second so these bytes add up.

Well, thousands of packets per second is a high estimate unless you have hundreds of players.

> And then we have ACK messages flying around, [...] And under TCP even
> received packets can be resent, which further worsens the issue.

Well, if you want to add reliability to UDP, you'd have to duplicate these features.

I've heard it argued that you can't beat TCP because it was designed by a team of PhDs, but that's just hogwash. The real "problem" with TCP is that it was designed for streams of data, such as file transfers -- it was optimized for bandwidth. They made tradeoffs (flow control, Nagle's algorithm) that help sustained data transfer but aren't well suited to realtime games.

It's possible to beat TCP with UDP if you have specific needs *and* know how to implement better flow control and retry algorithms. If you don't know how, then you are more likely to ruin your performance than improve it.

> You can't use TCP as a cure-all for your networking woes.

Agreed!

[This message has been edited by fprefect (edited December 19, 1999).]

>> In UDP packet is sent with an 8 byte header plus data. In TCP a packet
>> is sent with a minimum 40 byte header
>
>Not quite. The IP header is 24 bytes, then UDP adds 8 and TCP adds 16. Also, many modems
>allow "TCP header compression",
>but I've never heard of "UDP header compression".
Oops, I divided bits by 4 instead of 8 to get bytes in the TCP header.
Actually, I still get a TCP header with a minimum size of 20 bytes, and an IP header of 20 bytes. Are you including IP options?

TCP header compression only applies to the data link between the modem and the terminal server. TCP header compression is a function of SLIP or PPP. Once the packet gets into the "Internet cloud," then the header gets expanded. RFC 1332 specifies both IP and TCP header compression for PPP. I think SLIP only has TCP header compression, but I can't find the RFC right now.

If you're wondering why it doesn't stay compressed, try imagining the routers on either side of an OC-48 trying to de-compress every IP packet that comes their way. The CPUs, which can barely keep up with bandwidth as it is, would be overwhelmed.

> The real "problem" with TCP is that it was designed for streams of data, such as file
> transfer -- it was optimized for bandwidth. They made tradeoffs (flow control, Nagle's
> algorithm) that help sustained data transfer but aren't well suited to realtime games.
Right on the nose! TCP is a one-size-fits-all solution that may or may not work for your game. With some games, enough of the same variables get updated often enough that some degree of packet loss is acceptable. Even if Packet A is lost, Packet B later carries new information for everything Packet A said. So rather than wait for Packet A to be resent and then process both Packet A and Packet B, why not just process Packet B?

Of course, this is an over-simplification. You might have some data that doesn't have this update-often property. In those cases, you might consider opening a TCP/IP connection for "critical data" and using UDP/IP for "non-critical data". Many distributed systems in the enterprise have this kind of network architecture.

Example: a simple simulation of two people in a 2D room, over a peer-to-peer connection. The two people can move around the room and say things. You want to guarantee that the players will hear each other, but their relative positions don't have to be exact. So every time one player changes position, he wraps his new position in a UDP packet and sends it to the other player. Once in a while the UDP packet will get dropped, but that's OK, because the player is likely to move again soon. When a player says something, though, he sends the data over a TCP connection so that parts of the conversation won't be dropped.

>> Remember we're talking about thousands of packets per second so these bytes add up.
>Well, thousands of packets is a high unless you have hundreds of players.
Actually, I was referring specifically to EverQuest, which has around 1500 players per server at peak times. So one or two packets per player per second seems pretty conservative. My sniffer seems to indicate about 5 to 10 per second. I'm usually player 1501.

I stand corrected. Thank you for your elegant response. You learn something new every day, though I would still like to find a better solution. Maybe IPv6 will bring along more protocol stability. I hope so.

Kressilac


------------------
Derek Licciardi

[This message has been edited by kressilac (edited December 21, 1999).]

I'd like to expand upon that analogy of the customer being always right.

Customers are not always right. It is a misconception that they are.

Say, for example, we have a customer named Mr. Client. During a negotiation between Mr. Server and Mr. Client, the latter tries to make a transaction with false data, perhaps a falsified credit card (a.k.a. a "hacked" game packet). Is Mr. Client still correct? Even though Mr. Client adamantly claims the information is valid, should Mr. Server accept the credit card (game packet) as good? Hopefully most businesses would decline to accept the card.

Most businesses, while stressing that the client is *usually* right, do make the point that clients are not *always* right. Trust me on this one; I worked in high-volume computer sales and technical support for two years (long enough, thank you).

Playing a game with multiple clients and one server forces the question: who is right? If client 1's data does not match client 2's data, and neither matches the server's, who is right? The correct answer is the server.

By the very nature of client/server technology, *business rules* reside on the server. This holds true for almost all distributed applications. By centralizing the logic of an application, you can enforce security and validation. Clients are responsible for handling the presentation of the data received from the server. In the context of games, this translates into the player positions, sounds, models, etc. Bandwidth requirements can be reduced further by having client-side effects, such as explosions, done only on the client; no information needs to be transmitted to the server (frame position data, etc.). While this means not everyone may see the explosion at the same frame position, the implications do not adversely affect gameplay.

[This message has been edited by JDudgeon (edited December 21, 1999).]

[This message has been edited by JDudgeon (edited December 23, 1999).]

