From: "Michael S. Tsirkin" <mst@redhat.com>
To: Eli Cohen <elic@nvidia.com>
Cc: Jason Wang <jasowang@redhat.com>,
virtualization@lists.linux-foundation.org,
netdev <netdev@vger.kernel.org>,
lingshan.zhu@intel.com
Subject: Re: [PATCH] vhost: Use mutex to protect vq_irq setup
Date: Thu, 29 Oct 2020 17:45:42 -0400 [thread overview]
Message-ID: <20201029174526-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20201029073717.GA132479@mtl-vdi-166.wap.labs.mlnx>
On Thu, Oct 29, 2020 at 09:37:17AM +0200, Eli Cohen wrote:
> On Thu, Oct 29, 2020 at 03:03:24PM +0800, Jason Wang wrote:
> >
> > On 2020/10/28 10:20 PM, Eli Cohen wrote:
> > > Both irq_bypass_register_producer() and irq_bypass_unregister_producer()
> > > require process context to run. Change the call context lock from
> > > spinlock to mutex to protect the setup process to avoid deadlocks.
> > >
> > > Fixes: 265a0ad8731d ("vhost: introduce vhost_vring_call")
> > > Signed-off-by: Eli Cohen <elic@nvidia.com>
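To make the constraint concrete, here is a minimal sketch of the pattern
the patch is fixing (simplified, not the exact call chain):

	spin_lock(&vq->call_ctx.ctx_lock);
	/* irq_bypass_register_producer() can sleep: it serializes on an
	 * internal mutex, so calling it under a spinlock can trigger
	 * "BUG: sleeping function called from invalid context". */
	irq_bypass_register_producer(&vq->call_ctx.producer);
	spin_unlock(&vq->call_ctx.ctx_lock);

Turning ctx_lock into a mutex keeps the same critical section but allows
it to sleep, which is what the irq bypass API requires.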
> >
> >
> > Hi Eli:
> >
> > During review we spotted that the spinlock is not necessary: the path is
> > already protected by the vq mutex, so the lock was removed in this commit:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=86e182fe12ee5869022614457037097c70fe2ed1
> >
> > Thanks
> >
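Right, the producer setup already runs with the per-vq mutex held; roughly
(a simplified sketch of the calling convention, not the literal upstream
code):

	mutex_lock(&vq->mutex);			/* taken by the vhost core */
	vhost_vdpa_setup_vq_irq(v, qid);	/* already serialized here */
	mutex_unlock(&vq->mutex);

so the extra ctx_lock added nothing.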
>
> I see, thanks.
>
> BTW, while testing irq bypassing, I noticed that qemu started crashing
> and I failed to boot the VM. Is that a known issue? I checked with
> qemu's master branch, updated yesterday.
>
> Any ideas on how to debug this further?
> Did anyone actually verify that irq bypassing works?
Confused. Is the crash related to this patch somehow?
> >
> > > ---
> > > drivers/vhost/vdpa.c | 10 +++++-----
> > > drivers/vhost/vhost.c | 6 +++---
> > > drivers/vhost/vhost.h | 3 ++-
> > > 3 files changed, 10 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> > > index be783592fe58..0a744f2b6e76 100644
> > > --- a/drivers/vhost/vdpa.c
> > > +++ b/drivers/vhost/vdpa.c
> > > @@ -98,26 +98,26 @@ static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid)
> > > return;
> > > irq = ops->get_vq_irq(vdpa, qid);
> > > - spin_lock(&vq->call_ctx.ctx_lock);
> > > + mutex_lock(&vq->call_ctx.ctx_lock);
> > > irq_bypass_unregister_producer(&vq->call_ctx.producer);
> > > if (!vq->call_ctx.ctx || irq < 0) {
> > > - spin_unlock(&vq->call_ctx.ctx_lock);
> > > + mutex_unlock(&vq->call_ctx.ctx_lock);
> > > return;
> > > }
> > > vq->call_ctx.producer.token = vq->call_ctx.ctx;
> > > vq->call_ctx.producer.irq = irq;
> > > ret = irq_bypass_register_producer(&vq->call_ctx.producer);
> > > - spin_unlock(&vq->call_ctx.ctx_lock);
> > > + mutex_unlock(&vq->call_ctx.ctx_lock);
> > > }
> > > static void vhost_vdpa_unsetup_vq_irq(struct vhost_vdpa *v, u16 qid)
> > > {
> > > struct vhost_virtqueue *vq = &v->vqs[qid];
> > > - spin_lock(&vq->call_ctx.ctx_lock);
> > > + mutex_lock(&vq->call_ctx.ctx_lock);
> > > irq_bypass_unregister_producer(&vq->call_ctx.producer);
> > > - spin_unlock(&vq->call_ctx.ctx_lock);
> > > + mutex_unlock(&vq->call_ctx.ctx_lock);
> > > }
> > > static void vhost_vdpa_reset(struct vhost_vdpa *v)
> > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> > > index 9ad45e1d27f0..938239e11455 100644
> > > --- a/drivers/vhost/vhost.c
> > > +++ b/drivers/vhost/vhost.c
> > > @@ -302,7 +302,7 @@ static void vhost_vring_call_reset(struct vhost_vring_call *call_ctx)
> > > {
> > > call_ctx->ctx = NULL;
> > > memset(&call_ctx->producer, 0x0, sizeof(struct irq_bypass_producer));
> > > - spin_lock_init(&call_ctx->ctx_lock);
> > > + mutex_init(&call_ctx->ctx_lock);
> > > }
> > > static void vhost_vq_reset(struct vhost_dev *dev,
> > > @@ -1650,9 +1650,9 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
> > > break;
> > > }
> > > - spin_lock(&vq->call_ctx.ctx_lock);
> > > + mutex_lock(&vq->call_ctx.ctx_lock);
> > > swap(ctx, vq->call_ctx.ctx);
> > > - spin_unlock(&vq->call_ctx.ctx_lock);
> > > + mutex_unlock(&vq->call_ctx.ctx_lock);
> > > break;
> > > case VHOST_SET_VRING_ERR:
> > > if (copy_from_user(&f, argp, sizeof f)) {
> > > diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> > > index 9032d3c2a9f4..e8855ea04205 100644
> > > --- a/drivers/vhost/vhost.h
> > > +++ b/drivers/vhost/vhost.h
> > > @@ -64,7 +64,8 @@ enum vhost_uaddr_type {
> > > struct vhost_vring_call {
> > > struct eventfd_ctx *ctx;
> > > struct irq_bypass_producer producer;
> > > - spinlock_t ctx_lock;
> > > + /* protect vq irq setup */
> > > + struct mutex ctx_lock;
> > > };
> > > /* The virtqueue structure describes a queue attached to a device. */
> >
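For reference, after the commit Jason points to, the call structure loses
its lock entirely; roughly (a sketch based on that commit, not quoted from
it):

	struct vhost_vring_call {
		struct eventfd_ctx *ctx;
		struct irq_bypass_producer producer;
	};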