I have searched but I could not find the following:
Process1 transmits data over a TCP socket. The code that does the transmission is (pseudocode)

    //Section 1
    write(sock, data, len); // any language, just write the data
    //Section 2

After the write, Process1 could continue into Section 2, but this does not mean that the data has been transmitted. TCP could have buffered the data for later transmission.
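On Linux, for example, the SIOCOUTQ ioctl reports how much written data is still queued in the kernel's send buffer; a minimal sketch, assuming a connected TCP socket descriptor sock:

    #include <linux/sockios.h>   /* SIOCOUTQ is Linux-specific */
    #include <stdio.h>
    #include <sys/ioctl.h>

    /* Even after write() has returned, this can report bytes that the
       application wrote but that the kernel has not yet sent/acknowledged. */
    static void report_queued(int sock)
    {
        int queued = 0;
        if (ioctl(sock, SIOCOUTQ, &queued) == 0)
            printf("bytes still queued in send buffer: %d\n", queued);
    }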
Now Process2 is running concurrently with Process1. Both processes try to send data concurrently. I.e. both will have code as above.
Question1: If both processes write data to a TCP socket simultaneously, how will the data eventually be transmitted over the wire by IP/the OS?
a) All data of Process1 followed by all data of Process2 (or reverse) i.e. some FIFO order?
or
b) Data from Process1 & Process2 would be multiplexed by the IP layer (or OS) over the wire and would be sent "concurrently"?
Question2: If e.g. I added a delay, could I be sure that the data from the 2 processes was sent serially over the wire (e.g. all data of Process1 followed by all data of Process2)?
UPDATE:
Process1 and Process2 are not parent and child. Also, they are working on different sockets.
Thanks

4 Answers


Hmm, are you talking about a single socket shared by two processes (like parent and child)? In such a case the data will be buffered in the order of the output system calls (write(2)s).

If, which is more likely, you are talking about two unrelated TCP sockets in two processes, then there's no guarantee of any order in which the data will hit the wire. The reason is that the sockets might be connected to remote endpoints that consume data at different speeds. TCP flow control then makes sure that a fast sender does not overwhelm a slow receiver.


6 Comments

But the order of output system calls must be explicitly controlled, or it becomes indeterminate.
Yes, but that's a whole different discussion.
@Nikolai N Fetissov: So the behavior of OS system calls is not deterministic to analyze?
TCP stack is a very complex beast. From the application point of view you can just assume it's not deterministic :)
@Nikolai N Fetissov: But is TCP doing this or IP? TCP passes data to IP. So IP is also a black box?

Answer 1: the order is unspecified, at least on the sockets-supporting OS's that I've seen. Processes 1 & 2 should be designed to cooperate, e.g. by sharing a lock/mutex on the socket.
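A minimal sketch of that kind of cooperation between two unrelated processes, using an advisory flock() lock around each write; the lock-file path /tmp/sock.lock and the helper name locked_write are only illustrative:

    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Both processes open the same agreed-upon lock file and take an
       exclusive advisory lock around their write, so each message is
       handed to the kernel as a whole, without interleaving. */
    static ssize_t locked_write(int sock, const void *data, size_t len)
    {
        int lockfd = open("/tmp/sock.lock", O_CREAT | O_RDWR, 0666);
        if (lockfd < 0)
            return -1;

        flock(lockfd, LOCK_EX);              /* wait for the other process */
        ssize_t n = write(sock, data, len);  /* may still be a short write */
        flock(lockfd, LOCK_UN);

        close(lockfd);
        return n;
    }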

Answer 2: not if you mean just a fixed-time delay. Instead, have process 1 give a go-ahead signal to process 2, indicating that process 1 has done sending. Use pipes, local sockets, signals, shared memory or whatever your operating system provides in terms of interprocess communication. Only send the signal after "flushing" the socket (which isn't actually flushing).
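A minimal sketch of such a go-ahead signal using a named pipe (FIFO); the path /tmp/go_ahead is only illustrative and error handling is omitted:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Process 1: after writing its data to its socket, send the go-ahead. */
    void signal_done(void)
    {
        mkfifo("/tmp/go_ahead", 0666);            /* ok if it already exists */
        int fd = open("/tmp/go_ahead", O_WRONLY); /* blocks until a reader opens */
        write(fd, "x", 1);                        /* the go-ahead byte */
        close(fd);
    }

    /* Process 2: block here until Process 1 has finished sending. */
    void wait_for_go_ahead(void)
    {
        mkfifo("/tmp/go_ahead", 0666);
        int fd = open("/tmp/go_ahead", O_RDONLY);
        char c;
        read(fd, &c, 1);
        close(fd);
    }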

5 Comments

@larsmans: So write, flush, signal. Is flushing guaranteed to transmit all data to the destination, or is it like C++ I/O operations: only a request to the OS to flush, with no guarantee that it has happened yet?
That might depend on your OS, but in any case, the data is in the kernel buffer before process 2 starts writing, so that will likely push the data over the wire.
@user384706, no, many don't, they just wait for the OS to send data. Enabling TCP_NODELAY (prevent some buffering) is sometimes done to improve performance, though it doesn't always help.
@user384706, please read the post I linked to, it explains that there is no flush operation on sockets.
@larsmans: I read the link. What do you mean by "it explains that there is no flush operation on sockets"? It says that when you flush, data in the buffer that has not yet been sent is sent. How does TCP_NODELAY fit in? The post does not mention it.
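A minimal sketch of enabling the TCP_NODELAY option mentioned above, assuming a connected POSIX TCP socket descriptor sock:

    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_NODELAY */
    #include <sys/socket.h>

    /* Disable Nagle's algorithm: small segments are sent immediately
       instead of being coalesced while waiting for outstanding ACKs. */
    static int enable_nodelay(int sock)
    {
        int one = 1;
        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }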

A TCP socket is identified by a tuple that usually is at least (source IP, source port, destination IP, destination port). Different sockets have different identifying tuples.
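A minimal sketch (POSIX, IPv4) of reading that identifying tuple back from a connected socket with getsockname()/getpeername():

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    /* Print (source IP, source port) -> (destination IP, destination port)
       for a connected IPv4 TCP socket. */
    static void print_tuple(int sock)
    {
        struct sockaddr_in local, peer;
        socklen_t len = sizeof(local);
        char lip[INET_ADDRSTRLEN], pip[INET_ADDRSTRLEN];

        if (getsockname(sock, (struct sockaddr *)&local, &len) != 0)
            return;
        len = sizeof(peer);
        if (getpeername(sock, (struct sockaddr *)&peer, &len) != 0)
            return;

        inet_ntop(AF_INET, &local.sin_addr, lip, sizeof(lip));
        inet_ntop(AF_INET, &peer.sin_addr, pip, sizeof(pip));
        printf("%s:%u -> %s:%u\n",
               lip, ntohs(local.sin_port), pip, ntohs(peer.sin_port));
    }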

Now, if you are using the same socket in two processes, it depends on the order of the write(2) calls. But you should take into account that write(2) may not consume all the data you've passed to it: the send buffer may be full, causing a short write (write()'ing less than asked for and returning the number of bytes written), causing write() to block/sleep until there is some buffer space, or causing write() to return an EAGAIN/EWOULDBLOCK error (for non-blocking sockets).
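A minimal sketch of the usual loop that retries after short writes on a blocking socket; the helper name write_all is only illustrative:

    #include <errno.h>
    #include <unistd.h>

    /* Keep calling write() until the whole buffer has been handed to the
       kernel, retrying after short writes and EINTR. Returns 0 on success,
       -1 on error. */
    static int write_all(int sock, const char *data, size_t len)
    {
        while (len > 0) {
            ssize_t n = write(sock, data, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;      /* interrupted by a signal, try again */
                return -1;         /* real error; EAGAIN only for non-blocking sockets */
            }
            data += n;             /* skip the part the kernel accepted */
            len  -= (size_t)n;
        }
        return 0;
    }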

4 Comments

I am talking about a different socket per process. You mean the first process could block, the second could start writing, and interleaving could occur?
@user384706: If they are different sockets, it doesn't really matter. Every TCP segment contains the source and destination ports, so even if your TCP/IP stack were sending 1 byte of data at a time there is no possible confusion.
I know that they cannot be mixed. My question is what message flow I will see over the wire: multiplexing or serial transmission?
@user384706: You never really know. Depends on lots of things, mainly network conditions and OS's TCP/IP stack.
  1. write() is atomic; ditto send() and friends. Whichever one executed first would transmit all its data while the other one blocks.
  2. The delay is unnecessary, see (1).

EDIT: but if, as I now see, you are talking about different sockets per process, your question seems pointless. There is no way for an application to know how TCP used the network, so what does it matter? TCP will transmit in packets of up to an MTU each, in whatever order it sees fit.

2 Comments

You are saying that if Process1 calls write at e.g. timestamp 1:00:00 and Process2 calls write 2 seconds later, it is still not certain whether the packets will be multiplexed or whether Process1's transmission would already have finished?
Obviously there is a point with short transmissions or long intervals where the question becomes ridiculous. I still don't know why it matters to you. In a sense the question is inherently ridiculous.
