From: Paolo Bonzini <pbonzini@redhat.com>
To: Klim Kireev <klim.kireev@virtuozzo.com>, qemu-devel@nongnu.org
Cc: marcandre.lureau@redhat.com, den@virtuozzo.com
Subject: Re: [Qemu-devel] [PATCH] chardev/char-socket: add POLLHUP handler
Date: Tue, 16 Jan 2018 18:25:49 +0100
Message-ID: <71031eb9-06a0-21fe-dd89-4b49e23b97be@redhat.com>
In-Reply-To: <20180110131832.16623-1-klim.kireev@virtuozzo.com>
On 10/01/2018 14:18, Klim Kireev wrote:
> The following behavior was observed for QEMU configured by libvirt
> to use the guest agent, as usual, for guests without a virtio-serial
> driver (Windows, or a guest still in the BIOS stage).
>
> In QEMU, on the first connect to a listening character device socket,
> the listening socket is removed from poll just after the accept().
> virtio_serial_guest_ready() returns 0, so the descriptor of the
> connected Unix socket is also removed from poll, and it will not be
> present in poll() until the guest initializes the driver and changes
> the state of the serial port to "guest connected".
>
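(Illustration only, not part of the quoted report: a minimal standalone
GLib sketch of the state described above.  The connected fd has no watch
at all while the "device" cannot read, so a peer hangup is never
reported; the loop only wakes up because of the 2-second timeout added
for the demo.)

/* Standalone GLib sketch, not QEMU code: mimics the situation where the
 * child source has been destroyed because fd_can_read() returned 0, so
 * the socket fd is absent from poll() and the hangup goes unnoticed. */
#include <glib.h>
#include <sys/socket.h>
#include <unistd.h>

static gboolean on_timeout(gpointer loop)
{
    g_print("timeout fired; the hangup on the socket was never reported\n");
    g_main_loop_quit(loop);
    return FALSE;
}

int main(void)
{
    int fds[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, fds);
    close(fds[1]);                  /* the peer disconnects right away */

    /* No g_io_add_watch() on fds[0]: it is not in the poll set at all. */
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_timeout_add_seconds(2, on_timeout, loop);
    g_main_loop_run(loop);
    g_main_loop_unref(loop);
    return 0;
}
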
> In libvirt, connect() to the guest agent is performed on restart and
> runs under the VM state lock. connect() is blocking and can wait
> forever. In this case libvirt cannot perform ANY operation on that VM.
>
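(Illustration only, not from the report: the client side of this hang.
Once QEMU has accepted one connection that it never reads and the listen
backlog is full, a blocking connect() like libvirt's simply never
returns.  The socket path is the one used in the reproducer below.)

/* Standalone sketch, not libvirt code: a blocking connect() to the
 * chardev socket, performed the way libvirt effectively does it under
 * the VM state lock.  Once the listen backlog is exhausted, this call
 * blocks forever. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, "/tmp/console.sock", sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");          /* not reached while it hangs */
    }
    close(fd);
    return 0;
}
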
> The bug can be easily reproduced this way:
>
> Terminal 1:
> qemu-system-x86_64 -m 512 -device pci-serial,chardev=serial1 -chardev socket,id=serial1,path=/tmp/console.sock,server,nowait
> (virtio-serial and isa-serial also work)
>
> Terminal 2:
> minicom -D unix\#/tmp/console.sock
> (type something and press Enter)
> C-a x (to exit)
>
> Do this 3 times:
> minicom -D unix\#/tmp/console.sock
> C-a x
>
> Four connections are needed, because the first one is accepted by QEMU,
> the next two are queued by the kernel, and the fourth blocks.
>
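(Illustration only, not from the report: where the count of four comes
from, assuming a listen backlog of 1, which is consistent with "two are
queued by the kernel" above.  With nobody calling accept(), the first
two connects are queued and the third is refused; a blocking socket
would hang at that point instead.  In the reproducer QEMU accepts the
first connection, which frees one backlog slot, hence four connections
in total.)

/* Standalone demo, not QEMU code.  Assumption: backlog of 1, matching
 * the observation above.  Non-blocking sockets are used so the refused
 * connects fail with EAGAIN instead of hanging. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, "/tmp/backlog-demo.sock", sizeof(addr.sun_path) - 1);
    unlink(addr.sun_path);
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);                 /* nobody ever calls accept() */

    for (int i = 1; i <= 4; i++) {
        int c = socket(AF_UNIX, SOCK_STREAM, 0);
        fcntl(c, F_SETFL, O_NONBLOCK);
        if (connect(c, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connect #%d: queued\n", i);
        } else {
            printf("connect #%d: %s\n", i, strerror(errno));
        }
    }
    return 0;
}
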
> The problem is that QEMU does not re-add a read watch after a
> successful read until the guest device is ready to consume the
> received data, so I propose installing a separate POLLHUP watch
> regardless of whether the device is waiting for data or not. After
> the connection is closed, the guest is still able to read the data
> within a timeout.

I don't understand the timeout part.

Apart from that, maybe the bug is in io_watch_poll_prepare, which needs
to _always_ set up a "G_IO_ERR | G_IO_HUP | G_IO_NVAL" watch.  Only
G_IO_IN depends on iwp->fd_can_read(...) > 0.

So the change would start with something like this:

diff --git a/chardev/char-io.c b/chardev/char-io.c
index f81052481a..a5e65d4e7c 100644
--- a/chardev/char-io.c
+++ b/chardev/char-io.c
@@ -29,6 +29,7 @@ typedef struct IOWatchPoll {
 
     QIOChannel *ioc;
     GSource *src;
+    GIOCondition cond;
 
     IOCanReadHandler *fd_can_read;
     GSourceFunc fd_read;
@@ -41,25 +42,32 @@ static IOWatchPoll *io_watch_poll_from_source(GSource *source)
     return container_of(source, IOWatchPoll, parent);
 }
 
+static void io_watch_poll_destroy_source(IOWatchPoll *iwp)
+{
+    if (iwp->src) {
+        g_source_destroy(iwp->src);
+        g_source_unref(iwp->src);
+        iwp->src = NULL;
+        iwp->cond = 0;
+    }
+}
+
 static gboolean io_watch_poll_prepare(GSource *source,
                                       gint *timeout)
 {
     IOWatchPoll *iwp = io_watch_poll_from_source(source);
     bool now_active = iwp->fd_can_read(iwp->opaque) > 0;
-    bool was_active = iwp->src != NULL;
-    if (was_active == now_active) {
-        return FALSE;
+    GIOCondition cond = G_IO_ERR | G_IO_HUP | G_IO_NVAL;
+    if (now_active) {
+        cond |= G_IO_IN;
     }
 
-    if (now_active) {
-        iwp->src = qio_channel_create_watch(
-            iwp->ioc, G_IO_IN | G_IO_ERR | G_IO_HUP | G_IO_NVAL);
+    if (iwp->cond != cond) {
+        io_watch_poll_destroy_source(iwp);
+        iwp->cond = cond;
+        iwp->src = qio_channel_create_watch(iwp->ioc, cond);
         g_source_set_callback(iwp->src, iwp->fd_read, iwp->opaque, NULL);
         g_source_attach(iwp->src, iwp->context);
-    } else {
-        g_source_destroy(iwp->src);
-        g_source_unref(iwp->src);
-        iwp->src = NULL;
     }
     return FALSE;
 }
@@ -131,11 +139,7 @@ static void io_remove_watch_poll(GSource *source)
     IOWatchPoll *iwp;
 
     iwp = io_watch_poll_from_source(source);
-    if (iwp->src) {
-        g_source_destroy(iwp->src);
-        g_source_unref(iwp->src);
-        iwp->src = NULL;
-    }
+    io_watch_poll_destroy_source(iwp);
     g_source_destroy(&iwp->parent);
 }
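
To make the intent concrete, here is a minimal standalone GLib sketch
(illustration only, not QEMU code): as long as a G_IO_HUP | G_IO_ERR
watch exists on the fd, a peer hangup wakes the main loop immediately,
even though G_IO_IN is not being watched.

/* Standalone GLib illustration: a G_IO_HUP | G_IO_ERR watch on one end
 * of a socketpair fires as soon as the peer end is closed, even though
 * G_IO_IN is not part of the watched conditions. */
#include <glib.h>
#include <sys/socket.h>
#include <unistd.h>

static gboolean on_cond(GIOChannel *ch, GIOCondition cond, gpointer loop)
{
    g_print("woken up, condition 0x%x (G_IO_HUP is 0x%x)\n",
            (unsigned)cond, (unsigned)G_IO_HUP);
    g_main_loop_quit(loop);
    return FALSE;
}

int main(void)
{
    int fds[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, fds);
    close(fds[1]);                      /* the "peer" hangs up */

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    GIOChannel *ch = g_io_channel_unix_new(fds[0]);
    g_io_add_watch(ch, G_IO_HUP | G_IO_ERR, on_cond, loop);
    g_main_loop_run(loop);              /* returns almost immediately */
    g_main_loop_unref(loop);
    return 0;
}

Tracking iwp->cond also means prepare() only tears down and rebuilds the
child source when the wanted condition set actually changes, rather than
on every pass through the main loop.
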
Thanks,
Paolo