High-bandwidth IPC

Recommended Posts

If one process opens a socket to 127.0.0.1, and another process picks it up, what kind of bandwidth limits might there be? We've got to send huge amounts of data back and forth at high speed. I'd assume it's just a queue in memory, so it's as fast as the processes can send/receive it, but I want to make sure so we won't waste time trying to make this work. I want to use sockets because at some point we could change the program to communicate between two systems. Oh, BTW, it's running on a dual-processor P4 Xeon system.
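One way to find out on the actual hardware would be to just time a loopback stream. Here's a rough probe sketch (the port number, buffer size, and 512 MB transfer size are arbitrary placeholders, and most error checking is omitted):

/* Rough loopback throughput probe: parent listens on 127.0.0.1,
 * child connects and streams data; parent times the receive side. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define PORT     5555
#define BUFSZ    (64 * 1024)
#define TOTAL_MB 512

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    int one = 1;
    setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
    bind(lsock, (struct sockaddr *)&addr, sizeof addr);
    listen(lsock, 1);

    if (fork() == 0) {                      /* child: the sender */
        int s = socket(AF_INET, SOCK_STREAM, 0);
        connect(s, (struct sockaddr *)&addr, sizeof addr);
        char buf[BUFSZ];
        memset(buf, 'x', sizeof buf);
        long remaining = (long)TOTAL_MB * 1024 * 1024;
        while (remaining > 0) {
            ssize_t n = write(s, buf, sizeof buf);
            if (n <= 0) break;
            remaining -= n;
        }
        close(s);
        _exit(0);
    }

    int c = accept(lsock, NULL, NULL);      /* parent: the receiver */
    char buf[BUFSZ];
    long received = 0;
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (;;) {
        ssize_t n = read(c, buf, sizeof buf);
        if (n <= 0) break;                  /* EOF when the sender closes */
        received += n;
    }
    gettimeofday(&t1, NULL);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.1f MB in %.2f s = %.1f MB/s\n",
           received / (1024.0 * 1024.0), secs,
           received / (1024.0 * 1024.0) / secs);
    wait(NULL);
    close(c);
    close(lsock);
    return 0;
}

Run on the target box, that gives a concrete number rather than a guess.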

It's a massive waste of resources, but if it is too slow on the same system, it's never going to work across the wire.

The preferred IPC is a dual memory-mapped region - on NT you can use memory-mapped files to accomplish this. On Linux I'm not sure how (yet; I need to do something similar), but it really wouldn't surprise me if there is no mechanism to create a region that is memory-mapped into two processes (from user-mode code).
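For the NT side, a rough sketch of a named, page-file-backed mapping (the mapping name and the 16 MB size are made up for illustration):

/* Rough sketch of a shared region on NT via a named file mapping
 * backed by the page file. A second process opens the same name
 * with OpenFileMappingA and maps the same physical pages. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

#define REGION_SIZE (16 * 1024 * 1024)

int main(void)
{
    HANDLE map = CreateFileMappingA(INVALID_HANDLE_VALUE, /* page-file backed */
                                    NULL, PAGE_READWRITE,
                                    0, REGION_SIZE, "HBIPC_SharedRegion");
    if (map == NULL) {
        printf("CreateFileMapping failed: %lu\n", GetLastError());
        return 1;
    }

    void *view = MapViewOfFile(map, FILE_MAP_ALL_ACCESS, 0, 0, REGION_SIZE);
    if (view == NULL) {
        printf("MapViewOfFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* Both processes now see the same bytes; you still need your own
     * synchronization (e.g. a named event or mutex) around them. */
    memcpy(view, "hello from process A", 21);

    UnmapViewOfFile(view);
    CloseHandle(map);
    return 0;
}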

Guest Anonymous Poster
Using sockets on the loopback address (localhost, i.e. 127.0.0.1) typically includes some protocol overhead (e.g. TCP). It is unclear what optimizations a given protocol stack makes when the loopback address is used.

If you eventually want a distributed system, then use loopback.

Otherwise, consider using Unix domain sockets, which incur no network protocol overhead. This effectively wraps socket read/write semantics around a chunk of shared memory.
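For illustration, a minimal Unix domain socket sketch (the socket path is just a placeholder). The read/write calls are identical to the TCP case, which is what makes switching transports later easy:

/* Minimal Unix domain socket sketch: a forked client/server pair. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/wait.h>
#include <unistd.h>

#define SOCK_PATH "/tmp/hbipc.sock"

int main(void)
{
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, SOCK_PATH, sizeof addr.sun_path - 1);

    unlink(SOCK_PATH);                       /* remove any stale socket file */
    int lsock = socket(AF_UNIX, SOCK_STREAM, 0);
    bind(lsock, (struct sockaddr *)&addr, sizeof addr);
    listen(lsock, 1);

    if (fork() == 0) {                       /* child: client side */
        int s = socket(AF_UNIX, SOCK_STREAM, 0);
        connect(s, (struct sockaddr *)&addr, sizeof addr);
        write(s, "ping", 4);
        close(s);
        _exit(0);
    }

    int c = accept(lsock, NULL, NULL);       /* parent: server side */
    char buf[16];
    ssize_t n = read(c, buf, sizeof buf);
    printf("received %zd bytes: %.*s\n", n, (int)n, buf);

    wait(NULL);
    close(c);
    close(lsock);
    unlink(SOCK_PATH);
    return 0;
}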

As another poster suggested, memory-mapped files are an option. On Linux, a memory-mapped file is accessed via mmap(). Pretty simple. Another shared-memory option is System V shared memory (shmget); be warned, though, that shared memory of this kind is a system-wide resource and often doesn't get cleaned up if your program crashes or forgets to remove it (memory-mapped files, however, do get cleaned up).
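A rough sketch of the mmap route, assuming a placeholder file name and size; both processes map the same file with MAP_SHARED and see each other's writes:

/* Sketch of sharing memory between two processes via a memory-mapped
 * file on Linux. Synchronization is still up to you. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_FILE_PATH "/tmp/hbipc.map"
#define MAP_SIZE      (16 * 1024 * 1024)

int main(void)
{
    int fd = open(MAP_FILE_PATH, O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, MAP_SIZE) < 0) { perror("ftruncate"); return 1; }

    char *region = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Any other process that maps the same file sees this. */
    strcpy(region, "hello from process A");
    printf("region says: %s\n", region);

    munmap(region, MAP_SIZE);
    close(fd);
    return 0;
}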

quote:
Original post by Magmai Kai Holmlor
It's a massive waste of resources, but if it is too slow on the same system, it's never going to work across the wire.

The preferred IPC is a dual memory-mapped region - on NT you can use memory-mapped files to accomplish this. On Linux I'm not sure how (yet; I need to do something similar), but it really wouldn't surprise me if there is no mechanism to create a region that is memory-mapped into two processes (from user-mode code).



Hmmmm... maybe I didn't really put it the right way. The system needs to be fairly flexible, so that if, for example, I end up needing only an average of maybe 20 MB/s of data flow, I could distribute that data over a high-speed network (e.g. gigabit) and free up the sending system's processing load. However, if we get unlucky and the data flow turns out to be more like 150 MB/s, we would need to rearrange the system a bit so that both processes run on the same (dual-proc) machine.

If we knew the total amount of data now, there wouldn't be a problem, but as the project goes forward the design changes (as it always seems to do), so I want to build a system that works now, while we have time, and not later, when we'll be worrying about debugging, optimization, and all that. Once the system is in place, the data flow likely won't change, so we can fix the choice at compile time.

I'll check out both Unix domain sockets and memory-mapped regions. Unix sockets might be the way we go, because then it'll be easier to switch to TCP and/or UDP later; if it's just a FIFO in memory, it should be pretty fast. As for shared memory, well, I'd have to work with one of the other programmers on that; I'm not too experienced with things like race conditions, but it might be required.

Or I can bite the bullet and write a small library that does both, so we're safe either way, but that seems like a lot of time investment for not much.

I guess my question, then, is: can two processes communicate with each other at very high rates?

I recommend Unix domain sockets. X uses the same thing, and despite what some crackheads claim, it doesn't slow down the protocol.

Just out of curiosity, what are you developing that could pump out 150 MB/s?

What sort of Unix is this? Virtually all Linux IPC mechanisms are fast, but shared memory segments are of course the most efficient for large amounts of data, unless the memory needs to be copied from one process to another anyway.
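For completeness, a rough System V segment sketch (the key and size are placeholders), with the explicit removal the earlier poster warned about:

/* Sketch of a System V shared memory segment (shmget/shmat). Note the
 * explicit shmctl(IPC_RMID): these segments are a system-wide resource
 * and outlive a crashed process unless removed (see ipcs/ipcrm for
 * manual cleanup). */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SHM_KEY  0x1234
#define SHM_SIZE (16 * 1024 * 1024)

int main(void)
{
    /* Both processes use the same key; IPC_CREAT makes the segment
     * if it does not exist yet. */
    int shmid = shmget(SHM_KEY, SHM_SIZE, IPC_CREAT | 0600);
    if (shmid < 0) { perror("shmget"); return 1; }

    char *region = shmat(shmid, NULL, 0);
    if (region == (char *)-1) { perror("shmat"); return 1; }

    strcpy(region, "hello from process A");
    printf("segment says: %s\n", region);

    shmdt(region);
    /* Mark the segment for removal once all processes detach. */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}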

Information on the various options is available here.
