[.net] C# Asynchronous Sockets

Hey all,

I was wondering what the preferred methods are for thread safety when working with the asynchronous socket/stream methods. More specifically, what is the "right" way to block all other threads after the EndRead() call returns, so that a client's OnReceiveData() method doesn't clash with the main server thread and the other client threads?

Thank you for your time,
David

If you'd provide more detail on the general architecture (how many threads, what their general role is, how they communicate), I can be of more help.

What you're probably looking for are Event Wait Handles. Anyone working with threads in .NET should read that entire ebook.
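For context, an event wait handle lets one thread block until another signals it. Here is a minimal sketch using AutoResetEvent; all names are illustrative, not taken from this thread:

```csharp
using System;
using System.Threading;

class WaitHandleDemo {
    // AutoResetEvent starts unsignaled and releases one waiting thread per Set()
    static readonly AutoResetEvent dataReady = new AutoResetEvent(false);
    static int payload;

    static void Main() {
        var worker = new Thread(() => {
            dataReady.WaitOne();                  // block until the producer signals
            Console.WriteLine("got " + payload);
        });
        worker.Start();

        payload = 42;                             // publish the data...
        dataReady.Set();                          // ...then wake the worker
        worker.Join();
    }
}
```

Set()/WaitOne() also act as a memory barrier, so the worker is guaranteed to see the payload written before the signal.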

I think what you are looking for is the lock keyword in C#. This will give you thread safety, but you have to be careful, because you can run into deadlocks.
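As a toy illustration (not code from this thread): lock enters a monitor on the given object, so only one thread at a time executes the guarded block. The deadlock risk arises when two threads take two different locks in opposite orders.

```csharp
using System;
using System.Threading;

class LockDemo {
    static readonly object gate = new object();  // dedicated lock object
    static int counter = 0;

    static void Main() {
        var threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++) {
            threads[i] = new Thread(() => {
                for (int n = 0; n < 1000; n++) {
                    lock (gate) {                // only one thread increments at a time
                        counter++;
                    }
                }
            });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
        Console.WriteLine(counter);              // always 4000 with the lock in place
    }
}
```

Without the lock, the unsynchronized counter++ read-modify-write would lose increments under contention.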

[quote]If you'd provide more detail on the general architecture (how many threads, what their general role is, how they communicate), I can be of more help.[/quote]

Absolutely!

Basically, the main server thread sits in a loop that first checks for any pending connections; if there are any, it accepts each one as a TcpClient and passes it into the constructor of a custom class named "Client".
Client contains an OnRead(IAsyncResult ar) method, which is passed as the callback delegate to the BeginRead() method of the TcpClient's network stream.

OnRead() calls the stream's BeginRead() again at the end, to continuously grab new incoming data. Effectively, there is one server thread plus however many clients, each of which has a thread receiving data via the NetworkStream's BeginRead() method.
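That read loop might look roughly like this. This is a sketch with hypothetical field names, though BeginRead/EndRead themselves are the real NetworkStream methods; it is not runnable on its own, since it needs a connected TcpClient:

```csharp
using System;
using System.Net.Sockets;

class Client {
    readonly NetworkStream stream;
    readonly byte[] buffer = new byte[4096];

    public Client(TcpClient tcp) {
        stream = tcp.GetStream();
        // kick off the first asynchronous read
        stream.BeginRead(buffer, 0, buffer.Length, OnRead, null);
    }

    void OnRead(IAsyncResult ar) {
        int bytesRead = stream.EndRead(ar);  // completes the pending read
        if (bytesRead == 0) return;          // remote side closed the connection
        // ... process buffer[0..bytesRead) here ...
        // queue the next read so data keeps flowing
        stream.BeginRead(buffer, 0, buffer.Length, OnRead, null);
    }
}
```

Note that OnRead runs on a thread-pool thread, which is exactly why it needs synchronization before touching state the server thread also uses.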

Now, assuming I'm doing this correctly up to this point: how would I ensure that when data IS available (i.e. when EndRead() returns), the client's OnRead() method blocks all other threads until it has finished processing the incoming packet? This method can and will modify data in the global model that the main server thread also has access to.

Hope this helps, and thank you both for your help thus far!

Ok, that's what I thought. What you can do here is buffer these packets.

Let's say that your main loop is in a class called Server, and that each Client has a reference to it. Then, in your OnRead method, after you've processed the packet, you add it to a queue to be handled by the server.

Client
void OnRead(IAsyncResult r) {
    Packet p = // process packet
    server.Receive(p);
    // read again
}


Server
Queue<Packet> packets = new Queue<Packet>();

public void Receive(Packet p) {
    lock (packets) {
        packets.Enqueue(p);
    }
}

void YourMainLoop() {
    lock (packets) {
        while (packets.Count > 0) {
            Packet p = packets.Dequeue();
            // handle packet
        }
    }
    // your code
}


The lock (packets) statements ensure that only one thread at a time can Enqueue or Dequeue.
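The same pattern as a self-contained toy, with int standing in for Packet and worker threads standing in for the client read callbacks:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class QueueDemo {
    static readonly Queue<int> packets = new Queue<int>();

    // producer side: each "client" thread enqueues under the lock
    static void Receive(int p) {
        lock (packets) { packets.Enqueue(p); }
    }

    static void Main() {
        var producers = new Thread[4];
        for (int i = 0; i < producers.Length; i++) {
            producers[i] = new Thread(() => {
                for (int n = 0; n < 100; n++) Receive(n);
            });
            producers[i].Start();
        }
        foreach (var t in producers) t.Join();

        // consumer side: the "server" drains the queue under the same lock
        int count = 0;
        lock (packets) {
            while (packets.Count > 0) { packets.Dequeue(); count++; }
        }
        Console.WriteLine(count);  // prints 400: no enqueues were lost
    }
}
```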


This is exactly what I was looking for. Thank you very much!


No problem. Now, the important thing with locks is to have as few of them as possible, and to hold them for as little time as possible. So, where I said "handle packet" in the code above, you really don't want to be doing any heavy lifting. To get around that, just add another layer of caching. "All programming is an exercise in caching." -- Terje Mathisen

void YourMainLoop() {
    Queue<Packet> newPackets;
    lock (packets) {
        // Queue<T> has no Clone(); copy it via the constructor instead
        newPackets = new Queue<Packet>(packets);
        packets.Clear();
    }
    while (newPackets.Count > 0) {
        Packet p = newPackets.Dequeue();
        // handle packet
    }
    // your code
}


You add a slight overhead in that you're creating a copy of the queue and handling that, but you're also holding the lock for as little time as possible, which is far more important when multithreading.


Thank you for the additional comment; that's very helpful, as I have little knowledge of managing multiple threads at the moment. I was always a select() kind of guy. :P
