public inbox for virtualization@lists.linux-foundation.org
* [PATCH 1/2] virtio-blk: Fix hot-unplug race in remove method
@ 2012-05-03  2:19 Asias He
  2012-05-03  5:02 ` Sasha Levin
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Asias He @ 2012-05-03  2:19 UTC (permalink / raw)
  To: virtualization, Rusty Russell, Michael S. Tsirkin; +Cc: kvm

If we reset the virtio-blk device before the requests already dispatched
to the virtio-blk driver from the block layer are finished, we will get
stuck in blk_cleanup_queue() and the remove will fail.

blk_cleanup_queue() calls blk_drain_queue() to drain all requests queued
before the DEAD marking. However, it will never succeed if the device is
already stopped. We'll have q->in_flight[] > 0, so the drain will not
finish.

How to reproduce the race:
1. hot-plug a virtio-blk device
2. keep reading/writing the device in guest
3. hot-unplug while the device is busy serving I/O

Signed-off-by: Asias He <asias@redhat.com>
---
 drivers/block/virtio_blk.c |   26 ++++++++++++++++++++++----
 1 file changed, 22 insertions(+), 4 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 72fe55d..72b818b 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -46,6 +46,9 @@ struct virtio_blk
 	/* Ida index - used to track minor number allocations. */
 	int index;
 
+	/* Number of pending requests dispatched to driver. */
+	int req_in_flight;
+
 	/* Scatterlist: can be too big for stack. */
 	struct scatterlist sg[/*sg_elems*/];
 };
@@ -95,6 +98,7 @@ static void blk_done(struct virtqueue *vq)
 		}
 
 		__blk_end_request_all(vbr->req, error);
+		vblk->req_in_flight--;
 		mempool_free(vbr, vblk->pool);
 	}
 	/* In case queue is stopped waiting for more buffers. */
@@ -190,6 +194,7 @@ static void do_virtblk_request(struct request_queue *q)
 
 	while ((req = blk_peek_request(q)) != NULL) {
 		BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
+		vblk->req_in_flight++;
 
 		/* If this request fails, stop queue and wait for something to
 		   finish to restart it. */
@@ -443,7 +448,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
 	if (err)
 		goto out_free_vblk;
 
-	vblk->pool = mempool_create_kmalloc_pool(1,sizeof(struct virtblk_req));
+	vblk->pool = mempool_create_kmalloc_pool(1, sizeof(struct virtblk_req));
 	if (!vblk->pool) {
 		err = -ENOMEM;
 		goto out_free_vq;
@@ -466,6 +471,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
 
 	virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN);
 
+	vblk->req_in_flight = 0;
 	vblk->disk->major = major;
 	vblk->disk->first_minor = index_to_minor(index);
 	vblk->disk->private_data = vblk;
@@ -576,22 +582,34 @@ static void __devexit virtblk_remove(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk = vdev->priv;
 	int index = vblk->index;
+	unsigned long flags;
+	int req_in_flight;
 
 	/* Prevent config work handler from accessing the device. */
 	mutex_lock(&vblk->config_lock);
 	vblk->config_enable = false;
 	mutex_unlock(&vblk->config_lock);
 
+	/* Abort all request on the queue. */
+	blk_abort_queue(vblk->disk->queue);
+	del_gendisk(vblk->disk);
+
 	/* Stop all the virtqueues. */
 	vdev->config->reset(vdev);
-
+	vdev->config->del_vqs(vdev);
 	flush_work(&vblk->config_work);
 
-	del_gendisk(vblk->disk);
+	/* Wait requests dispatched to device driver to finish. */
+	do {
+		spin_lock_irqsave(&vblk->lock, flags);
+		req_in_flight = vblk->req_in_flight;
+		spin_unlock_irqrestore(&vblk->lock, flags);
+	} while (req_in_flight != 0);
+
 	blk_cleanup_queue(vblk->disk->queue);
 	put_disk(vblk->disk);
+
 	mempool_destroy(vblk->pool);
-	vdev->config->del_vqs(vdev);
 	kfree(vblk);
 	ida_simple_remove(&vd_index_ida, index);
 }
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 5+ messages in thread

* Re: [PATCH 1/2] virtio-blk: Fix hot-unplug race in remove method
  2012-05-03  2:19 [PATCH 1/2] virtio-blk: Fix hot-unplug race in remove method Asias He
@ 2012-05-03  5:02 ` Sasha Levin
  2012-05-03  5:27 ` Michael S. Tsirkin
       [not found] ` <CA+1xoqcKQmSQxvUR6syvseUow4AfcnKmDODW=j6t6gejmSi4NA@mail.gmail.com>
  2 siblings, 0 replies; 5+ messages in thread
From: Sasha Levin @ 2012-05-03  5:02 UTC (permalink / raw)
  To: Asias He; +Cc: Michael S. Tsirkin, kvm, virtualization

On Thu, May 3, 2012 at 4:19 AM, Asias He <asias@redhat.com> wrote:
> @@ -190,6 +194,7 @@ static void do_virtblk_request(struct request_queue *q)
>
>        while ((req = blk_peek_request(q)) != NULL) {
>                BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
> +               vblk->req_in_flight++;
>
>                /* If this request fails, stop queue and wait for something to
>                   finish to restart it. */

This is incremented before we know whether the request will actually be
sent, so if do_req() fails afterwards, req_in_flight will have been
increased but the request will never be sent.

Which means we won't be able to unplug the device, ever.

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH 1/2] virtio-blk: Fix hot-unplug race in remove method
  2012-05-03  2:19 [PATCH 1/2] virtio-blk: Fix hot-unplug race in remove method Asias He
  2012-05-03  5:02 ` Sasha Levin
@ 2012-05-03  5:27 ` Michael S. Tsirkin
  2012-05-03  7:38   ` Asias He
       [not found] ` <CA+1xoqcKQmSQxvUR6syvseUow4AfcnKmDODW=j6t6gejmSi4NA@mail.gmail.com>
  2 siblings, 1 reply; 5+ messages in thread
From: Michael S. Tsirkin @ 2012-05-03  5:27 UTC (permalink / raw)
  To: Asias He; +Cc: kvm, virtualization

On Thu, May 03, 2012 at 10:19:52AM +0800, Asias He wrote:
> If we reset the virtio-blk device before the requests already dispatched
> to the virtio-blk driver from the block layer are finished, we will get
> stuck in blk_cleanup_queue() and the remove will fail.
> 
> blk_cleanup_queue() calls blk_drain_queue() to drain all requests queued
> before the DEAD marking. However, it will never succeed if the device is
> already stopped. We'll have q->in_flight[] > 0, so the drain will not
> finish.
> 
> How to reproduce the race:
> 1. hot-plug a virtio-blk device
> 2. keep reading/writing the device in guest
> 3. hot-unplug while the device is busy serving I/O
> 
> Signed-off-by: Asias He <asias@redhat.com>

We used to do similar tracking in -net but dropped it all by using the
tracking that virtio core does.  Can't blk do the same?  Isn't there
some way to use virtqueue_detach_unused_buf for this instead?

> ---
>  drivers/block/virtio_blk.c |   26 ++++++++++++++++++++++----
>  1 file changed, 22 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 72fe55d..72b818b 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -46,6 +46,9 @@ struct virtio_blk
>  	/* Ida index - used to track minor number allocations. */
>  	int index;
>  
> +	/* Number of pending requests dispatched to driver. */
> +	int req_in_flight;
> +
>  	/* Scatterlist: can be too big for stack. */
>  	struct scatterlist sg[/*sg_elems*/];
>  };
> @@ -95,6 +98,7 @@ static void blk_done(struct virtqueue *vq)
>  		}
>  
>  		__blk_end_request_all(vbr->req, error);
> +		vblk->req_in_flight--;
>  		mempool_free(vbr, vblk->pool);
>  	}
>  	/* In case queue is stopped waiting for more buffers. */
> @@ -190,6 +194,7 @@ static void do_virtblk_request(struct request_queue *q)
>  
>  	while ((req = blk_peek_request(q)) != NULL) {
>  		BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
> +		vblk->req_in_flight++;
>  
>  		/* If this request fails, stop queue and wait for something to
>  		   finish to restart it. */
> @@ -443,7 +448,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
>  	if (err)
>  		goto out_free_vblk;
>  
> -	vblk->pool = mempool_create_kmalloc_pool(1,sizeof(struct virtblk_req));
> +	vblk->pool = mempool_create_kmalloc_pool(1, sizeof(struct virtblk_req));
>  	if (!vblk->pool) {
>  		err = -ENOMEM;
>  		goto out_free_vq;
> @@ -466,6 +471,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
>  
>  	virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN);
>  
> +	vblk->req_in_flight = 0;
>  	vblk->disk->major = major;
>  	vblk->disk->first_minor = index_to_minor(index);
>  	vblk->disk->private_data = vblk;
> @@ -576,22 +582,34 @@ static void __devexit virtblk_remove(struct virtio_device *vdev)
>  {
>  	struct virtio_blk *vblk = vdev->priv;
>  	int index = vblk->index;
> +	unsigned long flags;
> +	int req_in_flight;
>  
>  	/* Prevent config work handler from accessing the device. */
>  	mutex_lock(&vblk->config_lock);
>  	vblk->config_enable = false;
>  	mutex_unlock(&vblk->config_lock);
>  
> +	/* Abort all request on the queue. */
> +	blk_abort_queue(vblk->disk->queue);
> +	del_gendisk(vblk->disk);
> +
>  	/* Stop all the virtqueues. */
>  	vdev->config->reset(vdev);
> -
> +	vdev->config->del_vqs(vdev);
>  	flush_work(&vblk->config_work);
>  
> -	del_gendisk(vblk->disk);
> +	/* Wait requests dispatched to device driver to finish. */
> +	do {
> +		spin_lock_irqsave(&vblk->lock, flags);
> +		req_in_flight = vblk->req_in_flight;
> +		spin_unlock_irqrestore(&vblk->lock, flags);
> +	} while (req_in_flight != 0);
> +
>  	blk_cleanup_queue(vblk->disk->queue);
>  	put_disk(vblk->disk);
> +
>  	mempool_destroy(vblk->pool);
> -	vdev->config->del_vqs(vdev);
>  	kfree(vblk);
>  	ida_simple_remove(&vd_index_ida, index);
>  }
> -- 
> 1.7.10

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH 1/2] virtio-blk: Fix hot-unplug race in remove method
       [not found] ` <CA+1xoqcKQmSQxvUR6syvseUow4AfcnKmDODW=j6t6gejmSi4NA@mail.gmail.com>
@ 2012-05-03  7:36   ` Asias He
  0 siblings, 0 replies; 5+ messages in thread
From: Asias He @ 2012-05-03  7:36 UTC (permalink / raw)
  To: Sasha Levin; +Cc: Michael S. Tsirkin, kvm, virtualization

On 05/03/2012 01:02 PM, Sasha Levin wrote:
> On Thu, May 3, 2012 at 4:19 AM, Asias He <asias@redhat.com> wrote:
>> @@ -190,6 +194,7 @@ static void do_virtblk_request(struct request_queue *q)
>>
>>         while ((req = blk_peek_request(q)) != NULL) {
>>                 BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
>> +               vblk->req_in_flight++;
>>
>>                 /* If this request fails, stop queue and wait for something to
>>                    finish to restart it. */
>
> This is being increased before we know if the request will actually be
> sent, so if do_req() fails afterwards, req_in_flight would be
> increased but the request will never be sent.
>
> Which means we won't be able to unplug the device ever.

Yes, you are right. This introduces another race. I could do 
vblk->req_in_flight++ right after blk_start_request(req) to avoid this 
race.

> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


-- 
Asias

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH 1/2] virtio-blk: Fix hot-unplug race in remove method
  2012-05-03  5:27 ` Michael S. Tsirkin
@ 2012-05-03  7:38   ` Asias He
  0 siblings, 0 replies; 5+ messages in thread
From: Asias He @ 2012-05-03  7:38 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: kvm, virtualization

On 05/03/2012 01:27 PM, Michael S. Tsirkin wrote:
> On Thu, May 03, 2012 at 10:19:52AM +0800, Asias He wrote:
>> If we reset the virtio-blk device before the requests already dispatched
>> to the virtio-blk driver from the block layer are finished, we will get
>> stuck in blk_cleanup_queue() and the remove will fail.
>>
>> blk_cleanup_queue() calls blk_drain_queue() to drain all requests queued
>> before the DEAD marking. However, it will never succeed if the device is
>> already stopped. We'll have q->in_flight[] > 0, so the drain will not
>> finish.
>>
>> How to reproduce the race:
>> 1. hot-plug a virtio-blk device
>> 2. keep reading/writing the device in guest
>> 3. hot-unplug while the device is busy serving I/O
>>
>> Signed-off-by: Asias He <asias@redhat.com>
>
> We used to do similar tracking in -net but dropped it all by using the
> tracking that virtio core does.  Can't blk do the same?  Isn't there
> some way to use virtqueue_detach_unused_buf for this instead?

It is much simpler to use virtqueue_detach_unused_buf. Thanks, Michael.

>> ---
>>   drivers/block/virtio_blk.c |   26 ++++++++++++++++++++++----
>>   1 file changed, 22 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
>> index 72fe55d..72b818b 100644
>> --- a/drivers/block/virtio_blk.c
>> +++ b/drivers/block/virtio_blk.c
>> @@ -46,6 +46,9 @@ struct virtio_blk
>>   	/* Ida index - used to track minor number allocations. */
>>   	int index;
>>
>> +	/* Number of pending requests dispatched to driver. */
>> +	int req_in_flight;
>> +
>>   	/* Scatterlist: can be too big for stack. */
>>   	struct scatterlist sg[/*sg_elems*/];
>>   };
>> @@ -95,6 +98,7 @@ static void blk_done(struct virtqueue *vq)
>>   		}
>>
>>   		__blk_end_request_all(vbr->req, error);
>> +		vblk->req_in_flight--;
>>   		mempool_free(vbr, vblk->pool);
>>   	}
>>   	/* In case queue is stopped waiting for more buffers. */
>> @@ -190,6 +194,7 @@ static void do_virtblk_request(struct request_queue *q)
>>
>>   	while ((req = blk_peek_request(q)) != NULL) {
>>   		BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
>> +		vblk->req_in_flight++;
>>
>>   		/* If this request fails, stop queue and wait for something to
>>   		   finish to restart it. */
>> @@ -443,7 +448,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
>>   	if (err)
>>   		goto out_free_vblk;
>>
>> -	vblk->pool = mempool_create_kmalloc_pool(1,sizeof(struct virtblk_req));
>> +	vblk->pool = mempool_create_kmalloc_pool(1, sizeof(struct virtblk_req));
>>   	if (!vblk->pool) {
>>   		err = -ENOMEM;
>>   		goto out_free_vq;
>> @@ -466,6 +471,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
>>
>>   	virtblk_name_format("vd", index, vblk->disk->disk_name, DISK_NAME_LEN);
>>
>> +	vblk->req_in_flight = 0;
>>   	vblk->disk->major = major;
>>   	vblk->disk->first_minor = index_to_minor(index);
>>   	vblk->disk->private_data = vblk;
>> @@ -576,22 +582,34 @@ static void __devexit virtblk_remove(struct virtio_device *vdev)
>>   {
>>   	struct virtio_blk *vblk = vdev->priv;
>>   	int index = vblk->index;
>> +	unsigned long flags;
>> +	int req_in_flight;
>>
>>   	/* Prevent config work handler from accessing the device. */
>>   	mutex_lock(&vblk->config_lock);
>>   	vblk->config_enable = false;
>>   	mutex_unlock(&vblk->config_lock);
>>
>> +	/* Abort all request on the queue. */
>> +	blk_abort_queue(vblk->disk->queue);
>> +	del_gendisk(vblk->disk);
>> +
>>   	/* Stop all the virtqueues. */
>>   	vdev->config->reset(vdev);
>> -
>> +	vdev->config->del_vqs(vdev);
>>   	flush_work(&vblk->config_work);
>>
>> -	del_gendisk(vblk->disk);
>> +	/* Wait requests dispatched to device driver to finish. */
>> +	do {
>> +		spin_lock_irqsave(&vblk->lock, flags);
>> +		req_in_flight = vblk->req_in_flight;
>> +		spin_unlock_irqrestore(&vblk->lock, flags);
>> +	} while (req_in_flight != 0);
>> +
>>   	blk_cleanup_queue(vblk->disk->queue);
>>   	put_disk(vblk->disk);
>> +
>>   	mempool_destroy(vblk->pool);
>> -	vdev->config->del_vqs(vdev);
>>   	kfree(vblk);
>>   	ida_simple_remove(&vd_index_ida, index);
>>   }
>> --
>> 1.7.10


-- 
Asias

^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2012-05-03  7:38 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-05-03  2:19 [PATCH 1/2] virtio-blk: Fix hot-unplug race in remove method Asias He
2012-05-03  5:02 ` Sasha Levin
2012-05-03  5:27 ` Michael S. Tsirkin
2012-05-03  7:38   ` Asias He
     [not found] ` <CA+1xoqcKQmSQxvUR6syvseUow4AfcnKmDODW=j6t6gejmSi4NA@mail.gmail.com>
2012-05-03  7:36   ` Asias He
