* [Qemu-devel] [PATCH] slirp: Propagate host TCP RST packet to the guest after socket disconnected
@ 2018-08-30 15:57 Gavin Grant
From: Gavin Grant @ 2018-08-30 15:57 UTC
  To: samuel.thibault, jan.kiszka, qemu-devel

[PATCH] slirp: Propagate host TCP RST packet to the guest after socket disconnected

Please bear with me, since this is my first correspondence with the qemu-devel
mailing list (or any public devel mailing list for that matter!).

I believe I have observed a discrepancy between SLIRP TCP and normal UNIX/Linux
TCP behaviour relating to the sending of RST packet(s) after a socket has been
closed gracefully by a host client application and the application has
subsequently exited. When the host socket is closed gracefully, it is in a
"half-closed" state, i.e. the socket can no longer be read from, but it can
still be written to, and it waits in the FIN_WAIT_1 and FIN_WAIT_2 states until
the remote peer sends its FIN. This is fine. But once the application exits,
the host socket is no longer connected, and I would expect subsequent
slirp_send() calls to return an EPIPE error, causing RST packet(s) to be sent
to the server so that it closes its socket.
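
To make the expectation concrete, here is a generic POSIX sockets illustration
(not slirp code; fd, buf and len are placeholders):

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Once the peer has closed its end and its process has exited, a write
     * is answered with an RST and a subsequent write fails with EPIPE (or
     * ECONNRESET). This is the condition I would expect slirp to translate
     * into an RST towards the guest. */
    static int peer_is_gone(int fd, const void *buf, size_t len)
    {
        /* MSG_NOSIGNAL: report EPIPE instead of raising SIGPIPE */
        ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
        return n < 0 && (errno == EPIPE || errno == ECONNRESET);
    }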

However, unlike soread() as of commit 27d92ebc5ed1bb0b518d0ebc4c609182ad20a799,
sowrite() does not call tcp_drop() for specific errno values, so the server
socket remains open. sbappend() does not implement error handling itself, as it
relies on errors being handled in soread().
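
For context, the handling in soread() that I am referring to looks roughly like
this (a paraphrase of the idea only, not the literal code from that commit; the
errno list is illustrative):

    /* A fatal errno on the host socket drops the guest-side TCP control
     * block, which makes slirp emit an RST towards the guest. */
    nn = recv(so->s, buf, len, 0);
    if (nn <= 0) {
        if (nn < 0 && (errno == EAGAIN || errno == EINTR)) {
            return 0;                        /* transient, retry later */
        }
        /* e.g. ECONNRESET/ENOTCONN/EPIPE: abort the guest-side connection */
        tcp_drop(sototcpcb(so), errno);
        return -1;
    }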

My proposed patch attempts to address this by checking for a closed and
disconnected socket status in soread(). Forgive me if my logic and proposal are
foolish!
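
One way such a check might look, purely as a sketch of the idea (the SO_ERROR
query and the errno list are my illustration, not necessarily what the patch
does):

    /* Sketch, in the context of soread(): when recv() reports EOF, query the
     * pending socket error to distinguish a clean half-close from a
     * connection that has since been reset, and drop the guest-side
     * connection in the latter case so an RST is propagated to the guest. */
    if (nn == 0) {
        int err = 0;
        socklen_t elen = sizeof(err);
        getsockopt(so->s, SOL_SOCKET, SO_ERROR, &err, &elen);
        if (err == ECONNRESET || err == ENOTCONN || err == EPIPE) {
            tcp_drop(sototcpcb(so), err);
            return -1;
        }
    }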

The applications in which I observed this behaviour were the libiio test
programs (https://github.com/analogdevicesinc/libiio/tree/master/tests), which
connect to an iiod daemon process running on the server to interface with the
Linux Industrial I/O (IIO) subsystem. I noticed that if iiod is running in a
QEMU guest using SLIRP, iiod does not notice when client applications close
their connection and exit. This can leave multiple threads open on the server,
with their sockets stuck in the CLOSE_WAIT state. Once several of these
threads exist, my QEMU system becomes extremely slow and unusable.


