From: "Michael S. Tsirkin" <mst@redhat.com>
To: "Xueming(Steven) Li" <xuemingl@nvidia.com>
Cc: "zhangyuwei.9149@bytedance.com" <zhangyuwei.9149@bytedance.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"tiwei.bie@intel.com" <tiwei.bie@intel.com>,
"qemu-stable@nongnu.org" <qemu-stable@nongnu.org>
Subject: Re: [PATCH v3 1/2] vhost-user: fix VirtQ notifier cleanup
Date: Tue, 19 Oct 2021 02:57:35 -0400
Message-ID: <20211019025722-mutt-send-email-mst@kernel.org>
In-Reply-To: <4a1739ac3cdb895e41f7554865d5e1df4d70658c.camel@nvidia.com>
On Tue, Oct 19, 2021 at 06:45:19AM +0000, Xueming(Steven) Li wrote:
> On Tue, 2021-10-19 at 02:15 -0400, Michael S. Tsirkin wrote:
> > On Fri, Oct 08, 2021 at 03:58:04PM +0800, Xueming Li wrote:
> > > When the vhost-user device cleans up and unmaps the notifier
> > > address, a VM CPU thread that is still writing to the notifier
> > > faults on the now-invalid address.
> > >
> > > To avoid this race, wait for the memory flatview update by
> > > draining the RCU callbacks, then unmap the notifiers.
> > >
> > > Fixes: 44866521bd6e ("vhost-user: support registering external host notifiers")
> > > Cc: tiwei.bie@intel.com
> > > Cc: qemu-stable@nongnu.org
> > > Cc: Yuwei Zhang <zhangyuwei.9149@bytedance.com>
> > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > ---
> > > hw/virtio/vhost-user.c | 21 ++++++++++++++-------
> > > 1 file changed, 14 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> > > index bf6e50223c..b2e948bdc7 100644
> > > --- a/hw/virtio/vhost-user.c
> > > +++ b/hw/virtio/vhost-user.c
> > > @@ -1165,6 +1165,12 @@ static void vhost_user_host_notifier_remove(struct vhost_dev *dev,
> > >  
> > >      if (n->addr && n->set) {
> > >          virtio_queue_set_host_notifier_mr(vdev, queue_idx, &n->mr, false);
> > > +        if (!qemu_in_vcpu_thread()) { /* Avoid vCPU deadlock. */
> > > +            /* Wait for VM threads still accessing the old flatview that contains the notifier. */
> > > +            drain_call_rcu();
> > > +        }
> > > +        munmap(n->addr, qemu_real_host_page_size);
> > > +        n->addr = NULL;
> > >          n->set = false;
> > >      }
> > >  }
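The hunk above is the heart of the fix: the notifier is first dropped
from the memory map, then the code waits until no reader can still be
dereferencing the old flatview, and only then munmap()s the page.
Below is a minimal standalone sketch of that drain-then-unmap pattern;
it is not QEMU code: liburcu's synchronize_rcu() stands in for QEMU's
drain_call_rcu(), and the names notifier_page, notify and
notifier_remove are hypothetical.

#include <sys/mman.h>
#include <unistd.h>
#include <urcu.h>  /* liburcu; each reader thread calls rcu_register_thread() first */

static void *notifier_page;   /* RCU-protected pointer to the mapping */

/* Reader side: roughly what a vCPU thread does to kick the queue. */
static void notify(void)
{
    rcu_read_lock();
    void *p = rcu_dereference(notifier_page);
    if (p) {
        *(volatile int *)p = 1;           /* write into the mapped page */
    }
    rcu_read_unlock();
}

/* Cleanup side: unpublish, wait out the readers, then unmapping is safe. */
static void notifier_remove(void)
{
    void *p = notifier_page;
    rcu_assign_pointer(notifier_page, NULL);
    synchronize_rcu();                    /* no reader can still hold p */
    if (p) {
        munmap(p, (size_t)sysconf(_SC_PAGESIZE));
    }
}

The qemu_in_vcpu_thread() check in the hunk guards against the obvious
self-deadlock: a vCPU thread can itself be one of the readers the
drain has to wait for, so draining from it could never complete.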
> >
> >
> > ../hw/virtio/vhost-user.c: In function ‘vhost_user_host_notifier_remove’:
> > ../hw/virtio/vhost-user.c:1168:14: error: implicit declaration of function ‘qemu_in_vcpu_thread’ [-Werror=implicit-function-declaration]
> > 1168 | if (!qemu_in_vcpu_thread()) { /* Avoid vCPU deadlock. */
> > | ^~~~~~~~~~~~~~~~~~~
> > ../hw/virtio/vhost-user.c:1168:14: error: nested extern declaration of ‘qemu_in_vcpu_thread’ [-Werror=nested-externs]
> > cc1: all warnings being treated as errors
> > ninja: build stopped: subcommand failed.
> > make[1]: *** [Makefile:162: run-ninja] Error 1
> > make[1]: Leaving directory '/scm/qemu/build'
> > make: *** [GNUmakefile:11: all] Error 2
> >
> >
> > Although the following patch fixes it, bisect is broken.
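For anyone else tripping over the bisect break: qemu_in_vcpu_thread()
already exists in the tree, so the intermediate commit is only missing
a declaration. A minimal fix, assuming the declaration lives in
"sysemu/cpus.h" as in QEMU trees of this vintage, would be to add the
include in this patch itself so every commit builds:

--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@
 #include "qemu/osdep.h"
+#include "sysemu/cpus.h"   /* declares qemu_in_vcpu_thread() */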
>
> Yes, this really is an issue; v4 posted, thanks!
Please address the comment on 2/2 too.
> >
> >
> > > @@ -1502,12 +1508,7 @@ static int vhost_user_slave_handle_vring_host_notifier(struct vhost_dev *dev,
> > >  
> > >      n = &user->notifier[queue_idx];
> > >  
> > > -    if (n->addr) {
> > > -        virtio_queue_set_host_notifier_mr(vdev, queue_idx, &n->mr, false);
> > > -        object_unparent(OBJECT(&n->mr));
> > > -        munmap(n->addr, page_size);
> > > -        n->addr = NULL;
> > > -    }
> > > +    vhost_user_host_notifier_remove(dev, queue_idx);
> > >  
> > >      if (area->u64 & VHOST_USER_VRING_NOFD_MASK) {
> > >          return 0;
> > > @@ -2485,11 +2486,17 @@ void vhost_user_cleanup(VhostUserState *user)
> > >      for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
> > >          if (user->notifier[i].addr) {
> > >              object_unparent(OBJECT(&user->notifier[i].mr));
> > > +        }
> > > +    }
> > > +    memory_region_transaction_commit();
> > > +    /* Wait for VM threads still accessing the old flatview that contains the notifiers. */
> > > +    drain_call_rcu();
> > > +    for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
> > > +        if (user->notifier[i].addr) {
> > >              munmap(user->notifier[i].addr, qemu_real_host_page_size);
> > >              user->notifier[i].addr = NULL;
> > >          }
> > >      }
> > > -    memory_region_transaction_commit();
> > >      user->chr = NULL;
> > >  }
> > >
> > > --
> > > 2.33.0
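One detail of the vhost_user_cleanup() hunk worth spelling out: the
old single loop is split so that every notifier region is unparented
and the memory transaction committed first, then a single
drain_call_rcu() covers the whole batch, and only afterwards are the
pages unmapped. Paying the grace-period cost once rather than once per
queue matters with VIRTIO_QUEUE_MAX entries. Below is a sketch of that
batched two-phase teardown; the names (published, backing, N_RES) are
hypothetical, and liburcu's synchronize_rcu() again stands in for the
QEMU calls:

#include <stddef.h>
#include <urcu.h>

#define N_RES 16

static void *published[N_RES];   /* what RCU readers can still find */
static void *backing[N_RES];     /* what cleanup still has to release */

static void cleanup_all(void)
{
    /* Phase 1: unpublish every entry; readers stop seeing them. */
    for (size_t i = 0; i < N_RES; i++) {
        rcu_assign_pointer(published[i], NULL);
    }
    /* One grace period for the whole batch, not one per entry. */
    synchronize_rcu();
    /* Phase 2: no reader can hold a stale pointer; release for real. */
    for (size_t i = 0; i < N_RES; i++) {
        if (backing[i]) {
            /* munmap(backing[i], page_size) in the QEMU case */
            backing[i] = NULL;
        }
    }
}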
> >
>
Thread overview:
2021-10-08 7:58 [PATCH v3 0/2] Improve vhost-user VQ notifier unmap Xueming Li
2021-10-08 7:58 ` [PATCH v3 1/2] vhost-user: fix VirtQ notifier cleanup Xueming Li
2021-10-19 6:15 ` Michael S. Tsirkin
2021-10-19 6:45 ` Xueming(Steven) Li
2021-10-19 6:57 ` Michael S. Tsirkin [this message]
2021-10-08 7:58 ` [PATCH v3 2/2] vhost-user: remove VirtQ notifier restore Xueming Li
2021-10-19 6:37 ` Michael S. Tsirkin
2021-10-19 7:21 ` Xueming(Steven) Li
2021-10-19 7:24 ` Michael S. Tsirkin
2021-10-19 8:00 ` Xueming(Steven) Li