[.net] C# Asynchronous Sockets

Started by
6 comments, last by dmreichard 12 years, 10 months ago
Hey all,

I was wondering what the preferred methods are for thread safety when working with the asynchronous socket/stream methods. More specifically, what is the "right" way to block all other threads after the EndRead() call returns, so that a client's OnReceiveData() method doesn't clash with the main server thread and the other client threads?

Thank you for your time,
David
If you'd provide more detail on the general architecture (how many threads, what their general role is, how they communicate), I can be more help.

What you're probably looking for are Event Wait Handles. Anyone working with threads in .NET should read that entire ebook.
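For readers who haven't used wait handles before, here is a minimal sketch of signaling between two threads with an AutoResetEvent; the names and the value passed are purely illustrative, not from this thread:

```csharp
using System;
using System.Threading;

class WaitHandleSketch
{
    // AutoResetEvent releases exactly one waiting thread per Set() call,
    // then automatically resets to the non-signaled state.
    static readonly AutoResetEvent dataReady = new AutoResetEvent(false);
    static int pendingValue;

    static void Main()
    {
        var worker = new Thread(() =>
        {
            dataReady.WaitOne();           // block until the producer signals
            Console.WriteLine(pendingValue);
        });
        worker.Start();

        pendingValue = 42;                 // publish the data, then signal
        dataReady.Set();
        worker.Join();
    }
}
```

WaitOne/Set also act as memory barriers here, so the worker is guaranteed to see the value written before Set() was called.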
Anthony Umfer
I think what you are looking for is the lock keyword in C#. It will give you thread safety, but you have to be careful, because you can run into deadlocks.
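As a minimal sketch of the lock keyword (the Counter class here is made up just for illustration):

```csharp
using System;
using System.Threading;

class Counter
{
    private readonly object sync = new object(); // dedicated lock object
    private int count;

    public void Increment()
    {
        lock (sync)   // only one thread at a time may enter this block
        {
            count++;
        }
    }

    public int Count
    {
        get { lock (sync) { return count; } }
    }
}

class Program
{
    static void Main()
    {
        var c = new Counter();
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < 1000; j++) c.Increment();
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
        Console.WriteLine(c.Count); // always 4000; without the lock it could be less
    }
}
```

Locking on a private object (rather than on `this` or a public object) is what keeps outside code from accidentally taking the same lock and deadlocking you.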

Remember to mark someone's post as helpful if you found it so.

Journal:

http://www.gamedev.net/blog/908-xxchesters-blog/

Portfolio:

http://www.BrandonMcCulligh.ca

Company:

www.gwnp.ca


[quote name='typedef struct']
If you'd provide more detail on the general architecture (how many threads, what their general role is, how they communicate), I can be more help.

What you're probably looking for are Event Wait Handles. Anyone working with threads in .NET should read that entire ebook.
[/quote]


Absolutely!

Basically, the main server thread runs a loop that first checks for pending connections; if there are any, it accepts each one as a TcpClient and passes it into the constructor of a custom class named "Client".
Client contains an OnRead(IAsyncResult ar) method, which is passed as the callback delegate to the TcpClient's network stream's BeginRead() method.

OnRead() calls the stream's BeginRead() method again at the end, so it continuously grabs new incoming data. Effectively, there is one server thread, plus one pending read per client; the OnRead() callbacks fired by BeginRead() run on thread-pool threads rather than on a dedicated thread per client.
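For anyone following along, the re-arming callback pattern being described looks roughly like this; the buffer size, the loopback test harness, and the class layout are assumptions for the sketch, not the poster's actual code:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class Client
{
    private readonly NetworkStream stream;
    private readonly byte[] buffer = new byte[4096];

    public Client(TcpClient tcp)
    {
        stream = tcp.GetStream();
        // arm the first asynchronous read; OnRead is the completion callback
        stream.BeginRead(buffer, 0, buffer.Length, OnRead, null);
    }

    private void OnRead(IAsyncResult ar)
    {
        int bytesRead = stream.EndRead(ar); // completes the pending read
        if (bytesRead == 0) return;         // remote side closed the connection

        Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, bytesRead));

        // re-arm: queue the next asynchronous read so data keeps flowing
        stream.BeginRead(buffer, 0, buffer.Length, OnRead, null);
    }
}

class Program
{
    static void Main()
    {
        // tiny loopback harness so the sketch actually runs
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        var sender = new TcpClient();
        sender.Connect(IPAddress.Loopback, ((IPEndPoint)listener.LocalEndpoint).Port);
        var client = new Client(listener.AcceptTcpClient());

        byte[] msg = Encoding.ASCII.GetBytes("hello");
        sender.GetStream().Write(msg, 0, msg.Length);
        Thread.Sleep(200); // crude wait for the callback to fire before exiting
    }
}
```

Note that the callback runs on a thread-pool thread, which is exactly why the shared-state question in this post matters.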

Now, assuming I'm doing this correctly up to this point: how would I ensure that when data IS available (upon EndRead() returning), the client's OnRead() method blocks all other threads UNTIL it has finished processing the incoming packet? This method can and will modify data in the global model that the main server thread also has access to.

Hope this helps, and thank you both for your help thus far!
Ok, that's what I thought. What you can do here is buffer these packets.

Let's say that your main loop is in a class called Server, and that each Client has a reference to this. Then, in your OnRead method, after you've processed the packet, you would add it to a queue to be handled by the server.

Client
    void OnRead(IAsyncResult r) {
        Packet p = // process packet
        server.Receive(p);
        // read again
    }

Server
    public void Receive(Packet p) {
        lock (packets) {
            packets.Enqueue(p);
        }
    }

    void YourMainLoop() {
        lock (packets) {
            while (packets.Count > 0) {
                Packet p = packets.Dequeue();
                // handle packet
            }
        }
        // your code
    }

    Queue<Packet> packets = new Queue<Packet>();


The lock (packets) statements ensure that only 1 thread at a time can Enqueue or Dequeue.
Anthony Umfer

[quote name='typedef struct' timestamp='1307399703' post='4820278'](quoted post snipped)[/quote]


This is exactly what I was looking for. Thank you very much!

[quote name='typedef struct' timestamp='1307399703' post='4820278']
(quoted post snipped)
[/quote]

No problem. Now the important thing with locks is to have as few of them as possible, and to hold them for as little time as possible. So, where I said "handle packet" in the above code, you really don't want to be doing any heavy lifting. So to get around doing that, just add another layer of caching. "All programming is an exercise in caching." -- Terje Mathisen

void YourMainLoop() {
    Queue<Packet> newPackets;
    lock (packets) {
        // Queue<T> has no Clone(); copy it via the constructor instead
        newPackets = new Queue<Packet>(packets);
        packets.Clear();
    }
    while (newPackets.Count > 0) {
        Packet p = newPackets.Dequeue();
        // handle packet
    }
    // your code
}


So you add a slight overhead by copying the queue and handling the copy, but you also hold the lock for as little time as possible, which is far more important when multithreading.
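Another variation on the same idea: instead of copying, swap in a fresh queue, so the lock is held only for two reference assignments. This sketch uses int in place of Packet, and a separate lock object (needed because the packets field itself gets reassigned, so it can no longer serve as the lock):

```csharp
using System;
using System.Collections.Generic;

class Server
{
    private readonly object sync = new object(); // stable lock object
    private Queue<int> packets = new Queue<int>();

    public void Receive(int p)
    {
        lock (sync) { packets.Enqueue(p); }
    }

    public void MainLoopTick()
    {
        Queue<int> newPackets;
        lock (sync)
        {
            newPackets = packets;           // take the filled queue
            packets = new Queue<int>();     // leave an empty one behind
        }
        // drain outside the lock, so producers are never stalled
        while (newPackets.Count > 0)
            Console.WriteLine(newPackets.Dequeue());
    }
}

class Program
{
    static void Main()
    {
        var s = new Server();
        s.Receive(1);
        s.Receive(2);
        s.MainLoopTick();
    }
}
```

The allocation of one small Queue per tick is usually cheaper than copying every pending packet.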
Anthony Umfer

[quote name='dmreichard' timestamp='1307400172' post='4820280'](quoted posts snipped)[/quote]



Thank you for the additional comment, that is very helpful because I have little knowledge of managing multiple threads at the moment. I was always a select() kind of guy. :P

This topic is closed to new replies.
