From: "Michael S. Tsirkin" <mst@redhat.com>
To: Raphael Norwitz <raphael.norwitz@nutanix.com>
Cc: "Alex Bennée" <alex.bennee@linaro.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"slp@redhat.com" <slp@redhat.com>,
	"marcandre.lureau@redhat.com" <marcandre.lureau@redhat.com>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Mathieu Poirier" <mathieu.poirier@linaro.org>,
	"Viresh Kumar" <viresh.kumar@linaro.org>,
	"Stefano Garzarella" <sgarzare@redhat.com>,
	"Kevin Wolf" <kwolf@redhat.com>,
	"Hanna Reitz" <hreitz@redhat.com>,
	"open list:Block layer core" <qemu-block@nongnu.org>
Subject: Re: [PATCH v3 7/7] hw/virtio: generalise CHR_EVENT_CLOSED handling
Date: Tue, 29 Nov 2022 00:30:31 -0500	[thread overview]
Message-ID: <20221129002939-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <4B9A0F71-8134-476F-ADFB-22FCFC142462@nutanix.com>

On Tue, Nov 29, 2022 at 05:18:58AM +0000, Raphael Norwitz wrote:
> > On Nov 28, 2022, at 11:41 AM, Alex Bennée <alex.bennee@linaro.org> wrote:
> > 
> > ..and use for both vhost-user-blk and vhost-user-gpio. This avoids
> > the circular close by deferring shutdown due to disconnection until a
> > later point. virtio-user-blk already had this mechanism in place so
> 
> The mechanism was originally copied from virtio-net so we should probably fix it there too. AFAICT calling vhost_user_async_close() should work in net_vhost_user_event().
> 
> Otherwise the code looks good modulo a few nits. Happy to see the duplicated logic is being generalized.

If you do, make it a separate patch pls - and it does not have to block this series.

> 
> > generalise it as a vhost-user helper function and use for both blk and
> > gpio devices.
> > 
> > While we are at it we also fix up vhost-user-gpio to re-establish the
> > event handler after close down so we can reconnect later.
> > 
> > Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> > ---
> > include/hw/virtio/vhost-user.h | 18 +++++++++
> > hw/block/vhost-user-blk.c      | 41 +++-----------------
> > hw/virtio/vhost-user-gpio.c    | 11 +++++-
> > hw/virtio/vhost-user.c         | 71 ++++++++++++++++++++++++++++++++++
> > 4 files changed, 104 insertions(+), 37 deletions(-)
> > 
> > diff --git a/include/hw/virtio/vhost-user.h b/include/hw/virtio/vhost-user.h
> > index c6e693cd3f..191216a74f 100644
> > --- a/include/hw/virtio/vhost-user.h
> > +++ b/include/hw/virtio/vhost-user.h
> > @@ -68,4 +68,22 @@ bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp);
> >  */
> > void vhost_user_cleanup(VhostUserState *user);
> > 
> > +/**
> > + * vhost_user_async_close() - cleanup vhost-user post connection drop
> > + * @d: DeviceState for the associated device (passed to callback)
> > + * @chardev: the CharBackend associated with the connection
> > + * @vhost: the common vhost device
> > + * @cb: the user callback function to complete the clean-up
> > + *
> > + * This function is used to handle the shutdown of a vhost-user
> > + * connection to a backend. We handle this centrally to make sure we
> > + * do all the steps and handle potential races due to VM shutdowns.
> > + * Once the connection is disabled we schedule a bottom half to
> > + * complete the clean-up outside the chardev callback context.
> > + */
> > +typedef void (*vu_async_close_fn)(DeviceState *cb);
> > +
> > +void vhost_user_async_close(DeviceState *d,
> > +                            CharBackend *chardev, struct vhost_dev *vhost,
> > +                            vu_async_close_fn cb);
> > +
> > #endif
> > diff --git a/hw/block/vhost-user-blk.c b/hw/block/vhost-user-blk.c
> > index 1177064631..aff4d2b8cb 100644
> > --- a/hw/block/vhost-user-blk.c
> > +++ b/hw/block/vhost-user-blk.c
> > @@ -369,17 +369,10 @@ static void vhost_user_blk_disconnect(DeviceState *dev)
> >     vhost_user_blk_stop(vdev);
> > 
> >     vhost_dev_cleanup(&s->dev);
> > -}
> > 
> > -static void vhost_user_blk_chr_closed_bh(void *opaque)
> > -{
> > -    DeviceState *dev = opaque;
> > -    VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > -    VHostUserBlk *s = VHOST_USER_BLK(vdev);
> > -
> > -    vhost_user_blk_disconnect(dev);
> > +    /* Re-instate the event handler for new connections */
> >     qemu_chr_fe_set_handlers(&s->chardev, NULL, NULL, vhost_user_blk_event,
> > -                             NULL, opaque, NULL, true);
> > +                             NULL, dev, NULL, true);
> > }
> > 
> > static void vhost_user_blk_event(void *opaque, QEMUChrEvent event)
> > @@ -398,33 +391,9 @@ static void vhost_user_blk_event(void *opaque, QEMUChrEvent event)
> >         }
> >         break;
> >     case CHR_EVENT_CLOSED:
> > -        if (!runstate_check(RUN_STATE_SHUTDOWN)) {
> > -            /*
> > -             * A close event may happen during a read/write, but vhost
> > -             * code assumes the vhost_dev remains setup, so delay the
> > -             * stop & clear.
> > -             */
> > -            AioContext *ctx = qemu_get_current_aio_context();
> > -
> > -            qemu_chr_fe_set_handlers(&s->chardev, NULL, NULL, NULL, NULL,
> > -                    NULL, NULL, false);
> > -            aio_bh_schedule_oneshot(ctx, vhost_user_blk_chr_closed_bh, opaque);
> > -
> > -            /*
> > -             * Move vhost device to the stopped state. The vhost-user device
> > -             * will be clean up and disconnected in BH. This can be useful in
> > -             * the vhost migration code. If disconnect was caught there is an
> > -             * option for the general vhost code to get the dev state without
> > -             * knowing its type (in this case vhost-user).
> > -             *
> > -             * FIXME: this is sketchy to be reaching into vhost_dev
> > -             * now because we are forcing something that implies we
> > -             * have executed vhost_dev_stop() but that won't happen
> > -             * until vhost_user_blk_stop() gets called from the bh.
> > -             * Really this state check should be tracked locally.
> > -             */
> > -            s->dev.started = false;
> > -        }
> > +        /* defer close until later to avoid circular close */
> > +        vhost_user_async_close(dev, &s->chardev, &s->dev,
> > +                               vhost_user_blk_disconnect);
> >         break;
> >     case CHR_EVENT_BREAK:
> >     case CHR_EVENT_MUX_IN:
> > diff --git a/hw/virtio/vhost-user-gpio.c b/hw/virtio/vhost-user-gpio.c
> > index 75e28bcd3b..cd76287766 100644
> > --- a/hw/virtio/vhost-user-gpio.c
> > +++ b/hw/virtio/vhost-user-gpio.c
> > @@ -239,6 +239,8 @@ static int vu_gpio_connect(DeviceState *dev, Error **errp)
> >     return 0;
> > }
> > 
> > +static void vu_gpio_event(void *opaque, QEMUChrEvent event);
> > +
> > static void vu_gpio_disconnect(DeviceState *dev)
> > {
> >     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > @@ -251,6 +253,11 @@ static void vu_gpio_disconnect(DeviceState *dev)
> > 
> >     vu_gpio_stop(vdev);
> >     vhost_dev_cleanup(&gpio->vhost_dev);
> > +
> > +    /* Re-instate the event handler for new connections */
> > +    qemu_chr_fe_set_handlers(&gpio->chardev,
> > +                             NULL, NULL, vu_gpio_event,
> > +                             NULL, dev, NULL, true);
> > }
> > 
> > static void vu_gpio_event(void *opaque, QEMUChrEvent event)
> > @@ -268,7 +275,9 @@ static void vu_gpio_event(void *opaque, QEMUChrEvent event)
> >         }
> >         break;
> >     case CHR_EVENT_CLOSED:
> > -        vu_gpio_disconnect(dev);
> > +        /* defer close until later to avoid circular close */
> > +        vhost_user_async_close(dev, &gpio->chardev, &gpio->vhost_dev,
> > +                               vu_gpio_disconnect);
> >         break;
> >     case CHR_EVENT_BREAK:
> >     case CHR_EVENT_MUX_IN:
> > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > index abe23d4ebe..8f635844af 100644
> > --- a/hw/virtio/vhost-user.c
> > +++ b/hw/virtio/vhost-user.c
> > @@ -21,6 +21,7 @@
> > #include "qemu/error-report.h"
> > #include "qemu/main-loop.h"
> > #include "qemu/sockets.h"
> > +#include "sysemu/runstate.h"
> > #include "sysemu/cryptodev.h"
> > #include "migration/migration.h"
> > #include "migration/postcopy-ram.h"
> > @@ -2670,6 +2671,76 @@ void vhost_user_cleanup(VhostUserState *user)
> >     user->chr = NULL;
> > }
> > 
> 
> nit: Extra space 
> 
> > +
> > +typedef struct {
> > +    vu_async_close_fn cb;
> > +    DeviceState *dev;
> > +    CharBackend *cd;
> > +    struct vhost_dev *vhost;
> > +} VhostAsyncCallback;
> > +
> > +static void vhost_user_async_close_bh(void *opaque)
> > +{
> > +    VhostAsyncCallback *data = opaque;
> > +    struct vhost_dev *vhost = data->vhost;
> > +
> > +    /*
> > +     * If the vhost_dev has been cleared in the meantime there is
> > +     * nothing left to do as some other path has completed the
> > +     * cleanup.
> > +     */
> > +    if (vhost->vdev) {
> > +        data->cb(data->dev);
> > +    }
> > +
> > +    g_free(data);
> > +}
> > +
> > +/*
> > + * We only schedule the work if the machine is running. If suspended
> > + * we want to keep all the in-flight data as is for migration
> > + * purposes.
> > + */
> > +void vhost_user_async_close(DeviceState *d,
> > +                            CharBackend *chardev, struct vhost_dev *vhost,
> > +                            vu_async_close_fn cb)
> > +{
> > +    if (!runstate_check(RUN_STATE_SHUTDOWN)) {
> > +        /*
> > +         * A close event may happen during a read/write, but vhost
> > +         * code assumes the vhost_dev remains setup, so delay the
> > +         * stop & clear.
> > +         */
> > +        AioContext *ctx = qemu_get_current_aio_context();
> > +        VhostAsyncCallback *data = g_new0(VhostAsyncCallback, 1);
> > +
> > +        /* Save data for the callback */
> > +        data->cb = cb;
> > +        data->dev = d;
> > +        data->cd = chardev;
> > +        data->vhost = vhost;
> > +
> > +        /* Disable any further notifications on the chardev */
> > +        qemu_chr_fe_set_handlers(chardev,
> > +                                 NULL, NULL, NULL, NULL, NULL, NULL,
> > +                                 false);
> > +
> > +        aio_bh_schedule_oneshot(ctx, vhost_user_async_close_bh, data);
> > +
> > +        /*
> > +         * Move vhost device to the stopped state. The vhost-user device
> 
> Not this change’s fault but we should fix up the grammar here i.e. s/clean/cleaned/ and probably should be “disconnected in the BH”…etc.
> 
> > +         * will be clean up and disconnected in BH. This can be useful in
> > +         * the vhost migration code. If disconnect was caught there is an
> > +         * option for the general vhost code to get the dev state without
> > +         * knowing its type (in this case vhost-user).
> > +         *
> > +         * Note if the vhost device is fully cleared by the time we
> > +         * execute the bottom half we won't continue with the cleanup.
> > +         */
> > +        vhost->started = false;
> > +    }
> > +}
> > +
> > static int vhost_user_dev_start(struct vhost_dev *dev, bool started)
> > {
> >     if (!virtio_has_feature(dev->protocol_features,
> > -- 
> > 2.34.1
> > 
> 



Thread overview: 23+ messages
2022-11-28 16:40 [PATCH for 7.2-rc3 v3 0/7] fix vhost-user issues with CI Alex Bennée
2022-11-28 16:40 ` [PATCH v3 1/7] include/hw: attempt to document VirtIO feature variables Alex Bennée
2022-11-29  8:54   ` Stefano Garzarella
2022-11-28 16:41 ` [PATCH v3 2/7] include/hw: VM state takes precedence in virtio_device_should_start Alex Bennée
2022-11-28 17:09   ` Michael S. Tsirkin
2022-11-28 16:41 ` [PATCH v3 3/7] tests/qtests: override "force-legacy" for gpio virtio-mmio tests Alex Bennée
2022-11-28 16:41 ` [PATCH v3 4/7] hw/virtio: ensure a valid host_feature set for virtio-user-gpio Alex Bennée
2022-11-28 18:39   ` Stefan Hajnoczi
2022-11-28 19:39     ` Alex Bennée
2022-11-29 15:48       ` Michael S. Tsirkin
2022-11-29 21:01       ` Stefan Hajnoczi
2022-11-29 21:13         ` Michael S. Tsirkin
2022-11-29 22:53         ` Alex Bennée
2022-11-28 16:41 ` [PATCH v3 5/7] vhost: enable vrings in vhost_dev_start() for vhost-user devices Alex Bennée
2022-11-28 16:41 ` [PATCH v3 6/7] hw/virtio: add started_vu status field to vhost-user-gpio Alex Bennée
2022-11-28 16:41 ` [PATCH v3 7/7] hw/virtio: generalise CHR_EVENT_CLOSED handling Alex Bennée
2022-11-29  5:18   ` Raphael Norwitz
2022-11-29  5:30     ` Michael S. Tsirkin [this message]
2022-11-29 14:24       ` Raphael Norwitz
2022-11-30 10:25         ` Alex Bennée
2022-11-30 10:28           ` Michael S. Tsirkin
2022-11-30 11:28             ` Alex Bennée
2022-11-30  6:57 ` [PATCH for 7.2-rc3 v3 0/7] fix vhost-user issues with CI Michael S. Tsirkin
