# My sockets are as fast as pipes...

## 8 posts in this topic

Hello,

For IPC I have used sockets for a long time. That's convenient and relatively fast.

Then I thought that, at least when the two processes are on the same machine, I could gain speed by using pipes instead. My idea of pipes was that they use shared memory. But it turns out that my pipe communication is as slow as my socket communication. How can that be? Am I doing something wrong?

Currently, I have implemented the pipes on Windows, but if I can obtain a speed increase, I also want to do this on Linux.

Here is my pipe implementation, client side:

// connect to an existing pipe (the pipe name is an example):
HANDLE pipe = CreateFileA("\\\\.\\pipe\\examplePipe", GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);

// Write something:
DWORD bytesWritten;
WriteFile(pipe, buffer, bufferSize, &bytesWritten, NULL);

// Read the reply:
DWORD bytesRead;
ReadFile(pipe, buffer, bufferSize, &bytesRead, NULL);

And here is my pipe implementation, server side:

// Create the pipe (same example name as on the client side):
HANDLE _pipe = CreateNamedPipeA("\\\\.\\pipe\\examplePipe", PIPE_ACCESS_DUPLEX, PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT, 1, bufferSize, bufferSize, 0, NULL);

// Connect to client:
if (_pipe != INVALID_HANDLE_VALUE)
    ConnectNamedPipe(_pipe, NULL);

// Reading from and writing to the client are implemented in the same way as the client read/write routines above

Anything I am doing wrong?

##### Share on other sites

So if I understood correctly, using shared memory is the solution. But there is still no guarantee of a speed increase.

I'll try my hand at this, thanks again.

##### Share on other sites

> So if I understood correctly, using shared memory is the solution. But there is still no guarantee of a speed increase.
> I'll try my hand at this, thanks again.

Solution to what? Your original question said that you were trying to get something faster than standard sockets. The response was that, in general, you're going to get the same speed as pipes, or better, just by using sockets.

People did mention that using shared memory *might* be a bit faster, but really we're doing a crapshoot here, and if you're having problems they might be much more easily fixed by simple adjustments to your netcode.

Right now you're essentially asking us how to optimize some massive copying loop without any metric of why it's slow, or whether it's slow at all.

Edited by Satharis

##### Share on other sites

> So if I understood correctly, using shared memory is the solution. But there is still no guarantee of a speed increase.

Shared memory is certainly a guarantee of increased speed (unless done wrong), since it requires fewer copies and fewer user/kernel switches. Ideally you can implement it without any copying and with no kernel calls at all, though that is not possible in every situation.

It may, however, not give you a noticeable difference: since you noticed that sockets already have the same perceived speed as pipes, quite possibly this is not your bottleneck.

While it is true that communication over local sockets never touches the network card (not just on Windows, but on pretty much every serious operating system), the traffic still goes through the TCP/IP stack, which includes chopping the stream into packets, checksumming, and reassembling. That's at least two extra copies and two extra passes over the data, compared to a pipe. This is necessary because socket communication, even local, goes through the packet filter / firewall and can thus be re-routed, traffic-shaped, and accounted. That wouldn't be possible if the socket didn't do the complete TCP dance.

To that extent, a socket must be somewhat slower than a pipe, which is merely a memcpy to an OS-owned buffer and back. If it isn't slower (i.e. your program runs at the same speed with either of them), then you are likely looking at the wrong thing to fix.

Edited by samoth

##### Share on other sites

> While it is true that communication over local sockets never touches the network card (not just on Windows, but on pretty much every serious operating system), the traffic still goes through the TCP/IP stack, which includes chopping the stream into packets, checksumming, and reassembling. That's at least two extra copies and two extra passes over the data, compared to a pipe. This is necessary because socket communication, even local, goes through the packet filter / firewall and can thus be re-routed, traffic-shaped, and accounted. That wouldn't be possible if the socket didn't do the complete TCP dance.

Hold on there. Once the endpoints are both established as local clients of a stream-oriented (TCP) connection with no additional filters between them and none of the TCP options that change the basic behavior, the OS can essentially drop in the same machinery used for a regular pipe with absolutely no observable side effects aside from the improved efficiency. The OS can still support all the features of TCP on localhost connections by simply disabling the optimization in cases where the configuration requires it. TCP _connection_ will be slower than many other methods due to the need to check all the firewall rules and whatnot, but already connected endpoints are another story.

Whether your OS supports this or not is the question. You may well end up needing to support six different IPC mechanisms for six different OSes. Even shared memory may not be available on every platform you want to support, necessitating that you abstract away your IPC communication to allow for platform-specific communication channels. This is partly why I roll my eyes at all the Linux nerds (I roll my eyes at them a lot; I used to be a pretty hardcore one myself, so I've earned the right) who decry game programmers because we don't "just write portable code" from the beginning. Even in the world of POSIX-like OSes, the things that work best on Linux don't work well (or at all) on BSD/OSX/QNX/Solaris/etc. Hence the need for libraries like libev/libevent to abstract over the myriad of options for something as basic as I/O multiplexing.

##### Share on other sites

> Hold on there. Once the endpoints are both established as local clients of a stream-oriented (TCP) connection with no additional filters between them and none of the TCP options that change the basic behavior, the OS can essentially drop in the same machinery used for a regular pipe with absolutely no observable side effects aside from the improved efficiency.

It could, in theory. That would, however, mean that if you change packet filter/forwarding rules (which you can do at any time), the OS would have to go through every existing socket and re-validate it, re-adjusting all the special paths.

Since the very computers that frequently change packet filter rules tend to be the same computers that have many live connections open, that would be quite expensive.

Further, IP traffic is always accounted for (e.g. packets sent), not just when you enable it. Try typing netstat -s in a CMD window (incidentally, and for once, this command works exactly like its Unix counterpart under Windows!). The information displayed there couldn't be generated reliably if actual packets were simply skipped whenever Windows didn't feel like generating them.

Edited by samoth

##### Share on other sites

When I have needed ultra-high bandwidth or ultra-low latency on Windows, I use shared memory, knowing the solution is not portable. In all other cases I use sockets, because they are:

* Usually plenty fast
* Completely portable
* Usable in select(), thus allowing you to wake an I/O thread. Windows can do that anyway using WaitForMultipleObjects(), but I'm not aware of a *nix equivalent. Suggestions welcome!

