qemu-devel.nongnu.org archive mirror
* [PATCH v2] monitor/qmp: resume monitor when clearing its queue
@ 2019-10-24  8:12 Wolfgang Bumiller
  2019-11-13 16:45 ` Markus Armbruster
  0 siblings, 1 reply; 3+ messages in thread
From: Wolfgang Bumiller @ 2019-10-24  8:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael Roth, Marc-André Lureau, Gerd Hoffmann,
	Markus Armbruster, qemu-stable

When a monitor's queue is filled up in handle_qmp_command()
it gets suspended. It's the dispatcher bh's job currently to
resume the monitor, which it does after processing an event
from the queue. However, it is possible for a
CHR_EVENT_CLOSED event to be processed before the bh
is scheduled, which will clear the queue without resuming
the monitor, thereby preventing the dispatcher from reaching
the resume() call.
Any new connections to the qmp socket will be accept()ed and
show the greeting, but will not respond to any messages sent
afterwards (as they will not be read from the
still-suspended socket).
Fix this by resuming the monitor when clearing a queue which
was filled up.

Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
---
Changes to v1:
  * Update commit message to include the resulting symptoms.
  * Moved the resume code from `monitor_qmp_cleanup_req_queue_locked` to
    `monitor_qmp_cleanup_queues` to avoid an unnecessary resume when
    destroying the monitor (as the `_locked` version is also used by
    `monitor_data_destroy()`).
  * Renamed `monitor_qmp_cleanup_queues` to
    `monitor_qmp_cleanup_queues_and_resume` to reflect the change and be
    verbose about it for potential future users of the function.
    Currently the only user is `monitor_qmp_event()` in the
    `CHR_EVENT_CLOSED` case, which is exactly the problematic case currently.

Sorry for the delay :|

 monitor/qmp.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/monitor/qmp.c b/monitor/qmp.c
index 9d9e5d8b27..df689aa95e 100644
--- a/monitor/qmp.c
+++ b/monitor/qmp.c
@@ -75,10 +75,30 @@ static void monitor_qmp_cleanup_req_queue_locked(MonitorQMP *mon)
     }
 }
 
-static void monitor_qmp_cleanup_queues(MonitorQMP *mon)
+static void monitor_qmp_cleanup_queues_and_resume(MonitorQMP *mon)
 {
     qemu_mutex_lock(&mon->qmp_queue_lock);
+
+    /*
+     * Same condition as in monitor_qmp_bh_dispatcher(), but evaluated before
+     * removing an element from the queue (hence no `- 1`). The queue must
+     * also be non-empty, otherwise the monitor has not been suspended yet
+     * (or was already resumed).
+     */
+    bool need_resume = (!qmp_oob_enabled(mon) && mon->qmp_requests->length > 0)
+        || mon->qmp_requests->length == QMP_REQ_QUEUE_LEN_MAX;
+
     monitor_qmp_cleanup_req_queue_locked(mon);
+
+    if (need_resume) {
+        /*
+         * Pairs with the monitor_suspend() in handle_qmp_command() in case the
+         * queue gets cleared from a CHR_EVENT_CLOSED event before the dispatch
+         * bh got scheduled.
+         */
+        monitor_resume(&mon->common);
+    }
+
     qemu_mutex_unlock(&mon->qmp_queue_lock);
 }
 
@@ -332,7 +352,7 @@ static void monitor_qmp_event(void *opaque, int event)
          * stdio, it's possible that stdout is still open when stdin
          * is closed.
          */
-        monitor_qmp_cleanup_queues(mon);
+        monitor_qmp_cleanup_queues_and_resume(mon);
         json_message_parser_destroy(&mon->parser);
         json_message_parser_init(&mon->parser, handle_qmp_command,
                                  mon, NULL);
-- 
2.20.1






Thread overview: 3+ messages
2019-10-24  8:12 [PATCH v2] monitor/qmp: resume monitor when clearing its queue Wolfgang Bumiller
2019-11-13 16:45 ` Markus Armbruster
2019-11-15  8:25   ` Wolfgang Bumiller
