From: "Michael S. Tsirkin" <mst@redhat.com>
To: Xie Yongji <xieyongji@bytedance.com>
Cc: virtualization@lists.linux-foundation.org
Subject: Re: [PATCH v2] vduse: Fix race condition between resetting and irq injecting
Date: Wed, 13 Oct 2021 07:10:12 -0400
Message-ID: <20211013070657-mutt-send-email-mst@kernel.org>
In-Reply-To: <20210929083050.88-1-xieyongji@bytedance.com>

On Wed, Sep 29, 2021 at 04:30:50PM +0800, Xie Yongji wrote:
> The interrupt might be triggered after a reset since there is
> no synchronization between resetting and irq injecting.

In fact, irq_lock is already used to synchronize with irq
injection. Why isn't taking and releasing it in the reset path enough?
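To illustrate the question, here is a userspace sketch of that alternative, with pthread_mutex_t standing in for spinlock_t; the struct, field names, and functions are illustrative, not the actual VDUSE code:

```c
/* Userspace analogue: synchronize reset with irq injection using only
 * the existing per-queue lock.  Not kernel code; pthread stand-ins. */
#include <pthread.h>
#include <stdbool.h>

struct fake_vq {
	pthread_mutex_t irq_lock;	/* stands in for vq->irq_lock */
	bool ready;			/* injection allowed? */
	int irq_count;			/* observable effect of an injection */
};

/* Injection path: fire only under the lock, and only while ready. */
static void inject_irq(struct fake_vq *vq)
{
	pthread_mutex_lock(&vq->irq_lock);
	if (vq->ready)
		vq->irq_count++;
	pthread_mutex_unlock(&vq->irq_lock);
}

/* Reset path: clearing ->ready under the same lock guarantees that an
 * injection which already entered the critical section finishes first,
 * and any later injection sees ready == false and does nothing. */
static void reset_vq(struct fake_vq *vq)
{
	pthread_mutex_lock(&vq->irq_lock);
	vq->ready = false;
	pthread_mutex_unlock(&vq->irq_lock);
}
```

The known gap in this scheme is work already queued but not yet running at reset time; it still executes afterwards, which is presumably why the patch keeps the flush_work() calls in vduse_dev_reset() as well.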

> And it
> might break something if the interrupt is delayed until a new
> round of device initialization.
> 
> Fixes: c8a6153b6c59 ("vduse: Introduce VDUSE - vDPA Device in Userspace")
> Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
> ---
>  drivers/vdpa/vdpa_user/vduse_dev.c | 37 +++++++++++++++++++++++++------------
>  1 file changed, 25 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
> index cefb301b2ee4..841667a896dd 100644
> --- a/drivers/vdpa/vdpa_user/vduse_dev.c
> +++ b/drivers/vdpa/vdpa_user/vduse_dev.c
> @@ -80,6 +80,7 @@ struct vduse_dev {
>  	struct vdpa_callback config_cb;
>  	struct work_struct inject;
>  	spinlock_t irq_lock;
> +	struct rw_semaphore rwsem;
>  	int minor;
>  	bool broken;
>  	bool connected;

What does this lock protect? Use a more descriptive name pls,
and maybe add a comment.


> @@ -410,6 +411,8 @@ static void vduse_dev_reset(struct vduse_dev *dev)
>  	if (domain->bounce_map)
>  		vduse_domain_reset_bounce_map(domain);
>  
> +	down_write(&dev->rwsem);
> +
>  	dev->status = 0;
>  	dev->driver_features = 0;
>  	dev->generation++;
> @@ -443,6 +446,8 @@ static void vduse_dev_reset(struct vduse_dev *dev)
>  		flush_work(&vq->inject);
>  		flush_work(&vq->kick);
>  	}
> +
> +	up_write(&dev->rwsem);
>  }
>  
>  static int vduse_vdpa_set_vq_address(struct vdpa_device *vdpa, u16 idx,
> @@ -885,6 +890,23 @@ static void vduse_vq_irq_inject(struct work_struct *work)
>  	spin_unlock_irq(&vq->irq_lock);
>  }
>  
> +static int vduse_dev_queue_irq_work(struct vduse_dev *dev,
> +				    struct work_struct *irq_work)
> +{
> +	int ret = -EINVAL;
> +
> +	down_read(&dev->rwsem);
> +	if (!(dev->status & VIRTIO_CONFIG_S_DRIVER_OK))
> +		goto unlock;
> +
> +	ret = 0;
> +	queue_work(vduse_irq_wq, irq_work);
> +unlock:
> +	up_read(&dev->rwsem);
> +
> +	return ret;
> +}
> +
>  static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
>  			    unsigned long arg)
>  {


So that's a lot of overhead on the irq path.
Normally the way to address races like this is to add
flushing to the reset path, not locking to the irq path.
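A userspace sketch of that flush-based shape, with a thread standing in for a queued work item and pthread_join() standing in for flush_work(); everything here is illustrative, not the VDUSE code:

```c
/* Flush-based alternative: the injection path keeps only its existing
 * small lock; reset (the slow path) waits for in-flight work instead.
 * Userspace stand-ins, not kernel code. */
#include <pthread.h>
#include <stdbool.h>

struct fake_dev {
	pthread_mutex_t irq_lock;	/* existing per-irq lock only */
	bool driver_ok;			/* VIRTIO_CONFIG_S_DRIVER_OK analogue */
	int irq_count;
	pthread_t inflight;		/* stands in for a queued work item */
	bool has_inflight;
};

/* The work item: fast path, checks state under irq_lock only. */
static void *irq_work_fn(void *arg)
{
	struct fake_dev *dev = arg;

	pthread_mutex_lock(&dev->irq_lock);
	if (dev->driver_ok)
		dev->irq_count++;
	pthread_mutex_unlock(&dev->irq_lock);
	return NULL;
}

/* "queue_work": submit without any extra device-wide lock. */
static void queue_irq_work(struct fake_dev *dev)
{
	if (!dev->has_inflight) {
		pthread_create(&dev->inflight, NULL, irq_work_fn, dev);
		dev->has_inflight = true;
	}
}

/* Reset pays the cost: clear state under the lock, then "flush" (join)
 * in-flight work so nothing fires after reset returns. */
static void reset_dev(struct fake_dev *dev)
{
	pthread_mutex_lock(&dev->irq_lock);
	dev->driver_ok = false;
	pthread_mutex_unlock(&dev->irq_lock);
	if (dev->has_inflight) {	/* flush_work() analogue */
		pthread_join(dev->inflight, NULL);
		dev->has_inflight = false;
	}
}
```

The trade-off is exactly the one above: the per-injection rwsem disappears, and reset, which is rare, does the waiting.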


> @@ -966,12 +988,7 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
>  		break;
>  	}
>  	case VDUSE_DEV_INJECT_CONFIG_IRQ:
> -		ret = -EINVAL;
> -		if (!(dev->status & VIRTIO_CONFIG_S_DRIVER_OK))
> -			break;
> -
> -		ret = 0;
> -		queue_work(vduse_irq_wq, &dev->inject);
> +		ret = vduse_dev_queue_irq_work(dev, &dev->inject);
>  		break;
>  	case VDUSE_VQ_SETUP: {
>  		struct vduse_vq_config config;
> @@ -1049,10 +1066,6 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
>  	case VDUSE_VQ_INJECT_IRQ: {
>  		u32 index;
>  
> -		ret = -EINVAL;
> -		if (!(dev->status & VIRTIO_CONFIG_S_DRIVER_OK))
> -			break;
> -
>  		ret = -EFAULT;
>  		if (get_user(index, (u32 __user *)argp))
>  			break;
> @@ -1061,9 +1074,8 @@ static long vduse_dev_ioctl(struct file *file, unsigned int cmd,
>  		if (index >= dev->vq_num)
>  			break;
>  
> -		ret = 0;
>  		index = array_index_nospec(index, dev->vq_num);
> -		queue_work(vduse_irq_wq, &dev->vqs[index].inject);
> +		ret = vduse_dev_queue_irq_work(dev, &dev->vqs[index].inject);
>  		break;
>  	}
>  	default:
> @@ -1144,6 +1156,7 @@ static struct vduse_dev *vduse_dev_create(void)
>  	INIT_LIST_HEAD(&dev->send_list);
>  	INIT_LIST_HEAD(&dev->recv_list);
>  	spin_lock_init(&dev->irq_lock);
> +	init_rwsem(&dev->rwsem);
>  
>  	INIT_WORK(&dev->inject, vduse_dev_irq_inject);
>  	init_waitqueue_head(&dev->waitq);
> -- 
> 2.11.0
