From: "Michael S. Tsirkin"
Subject: Re: [PATCH v2] virtio-blk: Fix hot-unplug race in remove method
Date: Thu, 3 May 2012 10:56:35 +0300
Message-ID: <20120503075634.GK8266@redhat.com>
In-Reply-To: <1336030247-16323-1-git-send-email-asias@redhat.com>
References: <1336030247-16323-1-git-send-email-asias@redhat.com>
To: Asias He
Cc: kvm@vger.kernel.org, virtualization@lists.linux-foundation.org
List-Id: virtualization@lists.linuxfoundation.org

On Thu, May 03, 2012 at 03:30:47PM +0800, Asias He wrote:
> If we reset the virtio-blk device before the requests already dispatched
> to the virtio-blk driver from the block layer are finished, we will get
> stuck in blk_cleanup_queue() and the remove will fail.
>
> blk_cleanup_queue() calls blk_drain_queue() to drain all requests queued
> before the DEAD marking. However, it will never succeed if the device is
> already stopped. We'll have q->in_flight[] > 0, so the drain will not
> finish.
>
> How to reproduce the race:
> 1. hot-plug a virtio-blk device
> 2. keep reading/writing the device in guest
> 3. hot-unplug while the device is busy serving I/O
>
> Changes in v2:
> - Drop req_in_flight
> - Use virtqueue_detach_unused_buf to get request dispatched to driver
>
> Signed-off-by: Asias He
> ---
>  drivers/block/virtio_blk.c | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 72fe55d..670c28f 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -443,7 +443,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
>  	if (err)
>  		goto out_free_vblk;
>
> -	vblk->pool = mempool_create_kmalloc_pool(1,sizeof(struct virtblk_req));
> +	vblk->pool = mempool_create_kmalloc_pool(1, sizeof(struct virtblk_req));
>  	if (!vblk->pool) {
>  		err = -ENOMEM;
>  		goto out_free_vq;

This would be a bit easier to review if whitespace changes were avoided
and done in a separate patch targeting 3.5.

> @@ -576,20 +576,30 @@ static void __devexit virtblk_remove(struct virtio_device *vdev)
>  {
>  	struct virtio_blk *vblk = vdev->priv;
>  	int index = vblk->index;
> +	struct virtblk_req *vbr;
>
>  	/* Prevent config work handler from accessing the device. */
>  	mutex_lock(&vblk->config_lock);
>  	vblk->config_enable = false;
>  	mutex_unlock(&vblk->config_lock);
>
> +	/* Abort all request on the queue. */

All requests.

Also, the comment isn't really helpful. Want to explain why we abort
them all here? Won't they be drained by the later detach code?

> +	blk_abort_queue(vblk->disk->queue);
> +	del_gendisk(vblk->disk);
> +
>  	/* Stop all the virtqueues. */
>  	vdev->config->reset(vdev);
> -
>  	flush_work(&vblk->config_work);
>
> -	del_gendisk(vblk->disk);

Is there a reason you move del_gendisk to before reset? Is it safe to
del_gendisk while we might still be getting callbacks from the device?

> +	/* Abort request dispatched to driver. */
> +	while ((vbr = virtqueue_detach_unused_buf(vblk->vq))) {
> +		blk_abort_request(vbr->req);
> +		mempool_free(vbr, vblk->pool);
> +	}
> +
>  	blk_cleanup_queue(vblk->disk->queue);
>  	put_disk(vblk->disk);
> +
>  	mempool_destroy(vblk->pool);
>  	vdev->config->del_vqs(vdev);
>  	kfree(vblk);
> --
> 1.7.10