public inbox for kvm@vger.kernel.org
From: Asias He <asias@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: Re: [PATCH 2/3] virtio-blk: Reset device after blk_cleanup_queue()
Date: Fri, 25 May 2012 15:03:16 +0800	[thread overview]
Message-ID: <4FBF2EB4.3020702@redhat.com> (raw)
In-Reply-To: <20120525065229.GD15474@redhat.com>

On 05/25/2012 02:52 PM, Michael S. Tsirkin wrote:
> On Fri, May 25, 2012 at 10:34:48AM +0800, Asias He wrote:
>> blk_cleanup_queue() calls blk_drain_queue() to drain all the
>> requests before the queue is marked DEAD. If we reset the device before
>> blk_cleanup_queue(), the drain would fail.
>>
>> 1) If the queue is stopped in do_virtblk_request() because the device
>> is full, q->request_fn() will not be called.
>>
>> blk_drain_queue() {
>>     while (true) {
>>         ...
>>         if (!list_empty(&q->queue_head))
>>             __blk_run_queue(q) {
>>                 if (queue is not stopped)
>>                     q->request_fn();
>>             }
>>         ...
>>     }
>> }
>>
>> Not resetting the device before blk_cleanup_queue() gives the
>> interrupt handler blk_done() a chance to restart the queue.
>>
>> 2) Commit b79d866c8b7014a51f611a64c40546109beaf24a aborts requests
>> dispatched to the driver before blk_cleanup_queue(). There is a race if
>> requests are dispatched to the driver after the abort and before the
>> queue is marked DEAD. To fix this, instead of aborting the requests
>> explicitly, we can simply reset the device after blk_cleanup_queue() so
>> that the device can complete all the requests before the queue DEAD
>> marking in the drain process.
>>
>> Cc: Rusty Russell <rusty@rustcorp.com.au>
>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>> Cc: virtualization@lists.linux-foundation.org
>> Cc: kvm@vger.kernel.org
>> Signed-off-by: Asias He <asias@redhat.com>
>> ---
>>   drivers/block/virtio_blk.c |   12 +-----------
>>   1 file changed, 1 insertion(+), 11 deletions(-)
>>
>> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
>> index 1bed517..b4fa2d7 100644
>> --- a/drivers/block/virtio_blk.c
>> +++ b/drivers/block/virtio_blk.c
>> @@ -576,8 +576,6 @@ static void __devexit virtblk_remove(struct virtio_device *vdev)
>>   {
>>   	struct virtio_blk *vblk = vdev->priv;
>>   	int index = vblk->index;
>> -	struct virtblk_req *vbr;
>> -	unsigned long flags;
>>
>>   	/* Prevent config work handler from accessing the device. */
>>   	mutex_lock(&vblk->config_lock);
>> @@ -585,21 +583,13 @@ static void __devexit virtblk_remove(struct virtio_device *vdev)
>>   	mutex_unlock(&vblk->config_lock);
>>
>>   	del_gendisk(vblk->disk);
>> +	blk_cleanup_queue(vblk->disk->queue);
>
> Device is not reset here yet, so it will process
> some requests in parallel with blk_cleanup_queue.
> Is this a problem? Why not?

No, this is not a problem. blk_cleanup_queue() actually assumes the
device can still process requests before the queue is marked DEAD.
Otherwise the drain in blk_cleanup_queue() would never succeed.

>
>>
>>   	/* Stop all the virtqueues. */
>>   	vdev->config->reset(vdev);
>>
>>   	flush_work(&vblk->config_work);
>>
>> -	/* Abort requests dispatched to driver. */
>> -	spin_lock_irqsave(&vblk->lock, flags);
>> -	while ((vbr = virtqueue_detach_unused_buf(vblk->vq))) {
>> -		__blk_end_request_all(vbr->req, -EIO);
>> -		mempool_free(vbr, vblk->pool);
>> -	}
>> -	spin_unlock_irqrestore(&vblk->lock, flags);
>> -
>> -	blk_cleanup_queue(vblk->disk->queue);
>>   	put_disk(vblk->disk);
>>   	mempool_destroy(vblk->pool);
>>   	vdev->config->del_vqs(vdev);
>> --
>> 1.7.10.2


-- 
Asias

Thread overview: 15+ messages
2012-05-25  2:34 [PATCH 0/3] Fix hot-unplug race in virtio-blk Asias He
2012-05-25  2:34 ` [PATCH 1/3] virtio-blk: Call del_gendisk() before disable guest kick Asias He
2012-05-25  7:07   ` Michael S. Tsirkin
2012-05-25  2:34 ` [PATCH 2/3] virtio-blk: Reset device after blk_cleanup_queue() Asias He
2012-05-25  6:52   ` Michael S. Tsirkin
2012-05-25  7:03     ` Asias He [this message]
2012-05-25  7:06       ` Michael S. Tsirkin
2012-05-25  7:08   ` Michael S. Tsirkin
2012-05-25  2:34 ` [PATCH 3/3] virtio-blk: Use block layer provided spinlock Asias He
2012-05-25  7:02   ` Michael S. Tsirkin
2012-05-25  7:23     ` Asias He
2012-05-25  8:03     ` [PATCH v2 " Asias He
2012-05-25 13:10       ` Michael S. Tsirkin
2012-06-01  8:49       ` Michael S. Tsirkin
2012-06-04  0:41 ` [PATCH 0/3] Fix hot-unplug race in virtio-blk Rusty Russell
