The application in question (a server) has a number of connected clients. It processes each message from a client and responds with two output messages, which are generated sequentially and written to the socket. I observe that occasionally the second output message to a client is significantly delayed, and I'm trying to understand why.
Below is the filtered TCP flow between the application and the client captured from the client side.
- #14908 is the input message from the client; #14910 and #15337 are the two output messages.
- #14910 is not delayed, but #15337 is delayed by around 40 ms.
- As far as I can see, packet #15337 is not sent until the ACK for #15336 is received.
The application writes a message to the socket as long as the send does not return EWOULDBLOCK or EAGAIN. I'm assuming the socket is not blocked, because the window advertised in #14908 is 1424, and #15336 (len=229) shouldn't cause the next send to return EWOULDBLOCK.
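For reference, the write path looks roughly like the sketch below (a simplified reconstruction, not the actual application code; the function name `send_nonblocking` is mine): the application keeps handing bytes to the kernel until the socket would block, so once `send()` accepts the data, any further delay happens inside the TCP layer.

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Hand as much of buf to the kernel as it will take without blocking.
 * Returns the number of bytes accepted, or -1 on a real error.
 * EWOULDBLOCK/EAGAIN means the socket send buffer is full, so we stop
 * rather than block. */
ssize_t send_nonblocking(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = send(fd, buf + off, len - off, MSG_DONTWAIT);
        if (n > 0) {
            off += (size_t)n;
        } else if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            break;              /* kernel buffer full: stop, don't block */
        } else {
            return -1;          /* real error (connection reset, etc.) */
        }
    }
    return (ssize_t)off;
}
```

The point is that a successful `send()` only means the bytes entered the kernel's send buffer; it says nothing about when the segment is actually transmitted on the wire.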
So can you please help me understand what causes the TCP layer to delay sending #15337 until the ACK for #15336 arrives?
