c# sockets multi client/server implementation

Started by
16 comments, last by dantz 15 years, 4 months ago
I will be implementing a multi-client/server game structure that uses TCP. When a client establishes a connection with the server and the exchange is done, can I just retain the connection until the client initiates a disconnect? Or should I disconnect whenever a request/response is finished and establish a new connection again if needed? I am concerned that a continuous connection may not be a good choice because of network issues (bandwidth, security, etc.) or client machine issues (memory, local resources, etc.). How are multiplayer games usually implemented with regard to connections? I hope someone can help me with this.
For most games you will likely maintain a client connection for the whole time the client interacts with the server, until a disconnect is detected. This works better than per-request connection handling as in a web server.
Remember that you need to associate state data with each connected client.
While that can be worked around (e.g. with a session id), you will also need to send data to a client when it did not request anything (e.g. when other players move).

You should, however, time out and close connections that stay idle for too long (i.e. you receive nothing from a client for some time). For this to work, require the client to send keep-alive/ping messages at regular intervals whenever it has no other actions to send.
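The idle-timeout idea above is mostly bookkeeping, sketched here in Python for brevity (the thread is about C#, but the same logic ports directly to a Dictionary keyed by connection; the names `ClientState`, `touch`, and `sweep_idle` and the 30-second value are made up for illustration):

```python
IDLE_TIMEOUT = 30.0  # seconds of silence before we drop a client (example value)

class ClientState:
    """State the server keeps for each connected client."""
    def __init__(self, client_id, now):
        self.client_id = client_id
        self.last_seen = now  # updated on every message, including keep-alive pings

def touch(clients, client_id, now):
    """Record activity for a client (a game action or a ping)."""
    clients[client_id].last_seen = now

def sweep_idle(clients, now, timeout=IDLE_TIMEOUT):
    """Return ids of clients that have been silent longer than the timeout."""
    return [cid for cid, state in clients.items()
            if now - state.last_seen > timeout]

# usage: client 1 pings at t=25, client 2 stays silent
clients = {1: ClientState(1, 0.0), 2: ClientState(2, 0.0)}
touch(clients, 1, 25.0)
print(sweep_idle(clients, 40.0))  # [2]
```

The server would call `touch` from its receive path and run the sweep periodically, closing whatever connections it returns.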
Quote:Original post by jmp97
For most games you will likely maintain a client connection for the whole time the client interacts with the server, until a disconnect is detected. This works better than per-request connection handling as in a web server.
Remember that you need to associate state data with each connected client.
While that can be worked around (e.g. with a session id), you will also need to send data to a client when it did not request anything (e.g. when other players move).

You should, however, time out and close connections that stay idle for too long (i.e. you receive nothing from a client for some time). For this to work, require the client to send keep-alive/ping messages at regular intervals whenever it has no other actions to send.


Thanks for the reply.
If that is the case, wouldn't it take too many resources on the server? And how about the network traffic: does it affect the connections of other clients (making the response to a player's action slower)?

Does this mean I need one thread per client?

TIA

There is no need to spawn a new thread for each new client; you can handle all of the client data in one thread. There are pros and cons to both approaches, though.

If you spawn a thread per client, it is clean and simple to do, but you are limited by how many threads the system (game server) can spawn, and it will consume more memory.
If you do it all in one thread, you have to be careful about sharing time fairly among all clients so each gets its state updated, and that algorithm needs to be done carefully.

Nevertheless, if the game is not too large, I prefer spawning new threads because it can be implemented quickly.
Just keep doing!
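The single-thread alternative mentioned above is usually done by multiplexing all client sockets in one event loop instead of spawning a thread each. A minimal sketch in Python using the standard `selectors` module (a local socketpair stands in for a real client here; in C# the analogue would be `Socket.Select` or the async socket calls):

```python
import selectors
import socket

def serve_once(sel):
    """One pass of a single-threaded loop: service whichever client
    sockets are currently readable, with no thread per client."""
    for key, _ in sel.select(timeout=1.0):
        conn = key.fileobj
        data = conn.recv(4096)
        if data:
            conn.sendall(b"ack:" + data)  # per-client game logic goes here
        else:                             # empty read means the peer closed
            sel.unregister(conn)
            conn.close()

# usage: a local socket pair stands in for one connected client
sel = selectors.DefaultSelector()
server_side, client_side = socket.socketpair()
sel.register(server_side, selectors.EVENT_READ)
client_side.sendall(b"move")
serve_once(sel)
print(client_side.recv(4096))  # b'ack:move'
```

A real server would also register its listening socket with the selector and accept new clients inside the same loop, so everything stays on one thread.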
Quote:Original post by haxpor
There is no need to spawn a new thread for each new client; you can handle all of the client data in one thread. There are pros and cons to both approaches, though.

If you spawn a thread per client, it is clean and simple to do, but you are limited by how many threads the system (game server) can spawn, and it will consume more memory.
If you do it all in one thread, you have to be careful about sharing time fairly among all clients so each gets its state updated, and that algorithm needs to be done carefully.

Nevertheless, if the game is not too large, I prefer spawning new threads because it can be implemented quickly.


Ok thanks, that clarifies things.
But how about the network traffic? If too many clients (maybe 200) are playing (connected) at the same time, will the transfer rate of the responses be affected? Should I expect slower response times here?

TIA
hi Kylotan

for my clients, there will be two kinds (both on the same machine):
1) gameplay client - this client sends the player's moves to the game. The message size may range up to 1 MB, and the frequency is very high, e.g. every 500 ms.

2) normal client - these clients handle game updates (graphics/sound) and other content. File sizes may range from 100 MB upward; it depends on the file, but they may be quite large. The frequency may be every 10 minutes.

these clients may run simultaneously.
the machines may range from 20 up to 200 (at the moment), or maybe more.

Hope I clarified some details.

TIA
Are you saying you want a client to sometimes send 2 megabytes a second, or 100 megabytes every 10 minutes?
Quote:Original post by Kylotan
Are you saying you want a client to sometimes send 2 megabytes a second, or 100 megabytes every 10 minutes?


sorry, actually I just exaggerated the values for a rough upper estimate, but honestly I have not yet considered the size of the messages to send... this is really my first time doing network game programming and I am not sure how large the data can get.
You need to get a handle on how much data you will need, first. You can't do capacity planning without it.

Given the fixed size buffers in the kernel, both for UDP and TCP, the maximum amount of data that a client can send to you equals the rate at which you read data from the client. If you make sure to not read more than, say, 3 kB/s from each client, at the application layer, then the buffering (for UDP) and windowing (for TCP) will make sure that not more than that is actually successfully delivered to you. Clients trying to send more will either see a lot of packet loss and latency (UDP), or will see their client-side send queues and latencies grow possibly unbounded (TCP).
enum Bool { True, False, FileNotFound };
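The "do not read more than, say, 3 kB/s from each client" policy described above can be enforced with a simple per-client token bucket; the kernel's TCP window then pushes back on anything beyond that. A sketch (the `ReadBudget` name and the numbers are illustrative, not from the post):

```python
class ReadBudget:
    """Token bucket capping how many bytes we read from one client per second."""
    def __init__(self, bytes_per_sec):
        self.rate = bytes_per_sec
        self.tokens = float(bytes_per_sec)  # start with one second of budget
        self.last = 0.0

    def allow(self, nbytes, now):
        """Refill tokens for elapsed time, then ask if reading nbytes fits."""
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

budget = ReadBudget(3000)        # ~3 kB/s, as in the post
print(budget.allow(2000, 0.0))   # True  (within the initial budget)
print(budget.allow(2000, 0.0))   # False (only 1000 tokens left)
print(budget.allow(2000, 1.0))   # True  (a second passed; refilled to 3000)
```

When `allow` returns False, the server simply skips reading that client's socket this tick; unread data accumulates in the kernel buffer and TCP flow control slows the sender down.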
Quote:Original post by hplus0603
You need to get a handle on how much data you will need, first. You can't do capacity planning without it.

Given the fixed size buffers in the kernel, both for UDP and TCP, the maximum amount of data that a client can send to you equals the rate at which you read data from the client. If you make sure to not read more than, say, 3 kB/s from each client, at the application layer, then the buffering (for UDP) and windowing (for TCP) will make sure that not more than that is actually successfully delivered to you. Clients trying to send more will either see a lot of packet loss and latency (UDP), or will see their client-side send queues and latencies grow possibly unbounded (TCP).


Ok, I got it, thanks.

As of now, the maximum size I can have is 50 kB of data. My 200 clients may connect simultaneously and then send and read this. It may go on indefinitely until a client triggers an end-game or disconnects. My question is, which would be better for my server:
1) keep the connection alive until the client terminates?
2) have the client connect to the server every time it needs to send/receive 50 kB of data?

I really appreciate your replies

TIA
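For a sense of scale, a back-of-envelope check using the numbers from this thread (200 clients, 50 kB per exchange, and the ~500 ms gameplay frequency mentioned earlier; this assumes every client sends the full payload every interval, which is a worst case):

```python
clients = 200
payload_kb = 50        # 50 kB per exchange (from the post above)
interval_s = 0.5       # ~500 ms gameplay frequency (worst-case assumption)

kb_per_s = clients * payload_kb / interval_s
mbit_per_s = kb_per_s * 8 / 1000
print(kb_per_s, mbit_per_s)  # 20000.0 160.0  -> ~20 MB/s, ~160 Mbit/s
```

Whichever connection strategy is chosen, the payload dominates at that rate; reconnecting per exchange only adds roughly one TCP handshake round trip on top of it.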

