qemu-devel.nongnu.org archive mirror
* [PATCH v3] qemu: send stop event after bdrv_flush_all
@ 2023-12-05  9:19 tianren
  2023-12-05  9:29 ` Daniel P. Berrangé
  0 siblings, 1 reply; 2+ messages in thread
From: tianren @ 2023-12-05  9:19 UTC (permalink / raw)
  To: qemu-devel; +Cc: berrange, richard.henderson, pbonzini, Tianren Zhang

From: Tianren Zhang <tianren@smartx.com>

The stop process is not finished until bdrv_flush_all
is done. Some users (e.g., libvirt) watch for the STOP
event and invoke lock-release logic to revoke the disk
lock held by the current QEMU process when the event is
emitted. In that case, if bdrv_flush_all runs after the
STOP event, the disk lock may be released while QEMU is
still waiting for I/O. Therefore, it's better to emit
the STOP event after the whole stop process is done, so
users are guaranteed that the stop process has finished
when they receive the STOP event.

Signed-off-by: Tianren Zhang <tianren@smartx.com>
---
v2: do not call runstate_is_running twice
v3: remove irrelevant info from commit msg
---
 system/cpus.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/system/cpus.c b/system/cpus.c
index a444a747f0..49af0f92b5 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -262,21 +262,24 @@ void cpu_interrupt(CPUState *cpu, int mask)
 static int do_vm_stop(RunState state, bool send_stop)
 {
     int ret = 0;
+    bool do_send_stop = false;
 
     if (runstate_is_running()) {
         runstate_set(state);
         cpu_disable_ticks();
         pause_all_vcpus();
         vm_state_notify(0, state);
-        if (send_stop) {
-            qapi_event_send_stop();
-        }
+        do_send_stop = send_stop;
     }
 
     bdrv_drain_all();
     ret = bdrv_flush_all();
     trace_vm_stop_flush_all(ret);
 
+    if (do_send_stop) {
+        qapi_event_send_stop();
+    }
+
     return ret;
 }
 
-- 
2.41.0




* Re: [PATCH v3] qemu: send stop event after bdrv_flush_all
  2023-12-05  9:19 [PATCH v3] qemu: send stop event after bdrv_flush_all tianren
@ 2023-12-05  9:29 ` Daniel P. Berrangé
  0 siblings, 0 replies; 2+ messages in thread
From: Daniel P. Berrangé @ 2023-12-05  9:29 UTC (permalink / raw)
  To: tianren; +Cc: qemu-devel, richard.henderson, pbonzini

On Tue, Dec 05, 2023 at 04:19:03AM -0500, tianren@smartx.com wrote:
> From: Tianren Zhang <tianren@smartx.com>
> 
> The stop process is not finished until bdrv_flush_all
> is done. Some users (e.g., libvirt) watch for the STOP
> event and invoke lock-release logic to revoke the disk
> lock held by the current QEMU process when the event is
> emitted. In that case, if bdrv_flush_all runs after the
> STOP event, the disk lock may be released while QEMU is
> still waiting for I/O. Therefore, it's better to emit
> the STOP event after the whole stop process is done, so
> users are guaranteed that the stop process has finished
> when they receive the STOP event.
> 
> Signed-off-by: Tianren Zhang <tianren@smartx.com>
> ---
> v2: do not call runstate_is_running twice
> v3: remove irrelevant info from commit msg
> ---
>  system/cpus.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



