    Subject: Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal
    From: Stefan Hajnoczi @ 2025-05-21 14:56 UTC
      To: Parav Pandit
      Cc: mst, axboe, virtualization, linux-block, stable, lirongqing, kch,
    	xuanzhuo, pbonzini, jasowang, Max Gurtovoy, Israel Rukshin
    
    
    On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote:
    > When the PCI device is surprise removed, requests may not be
    > completed by the device because the VQ is marked as broken. Due to
    > this, disk deletion hangs.
    > 
    > Fix it by aborting the requests when the VQ is broken.
    > 
    > With this fix, fio now completes swiftly.
    > An alternative of failing the requests on IO timeout was considered;
    > however, when the driver already knows the block device is
    > unresponsive, swiftly clearing the requests lets users and upper
    > layers react quickly.
    > 
    > Verified with multiple device unplug iterations, with some requests
    > pending in the virtio used ring and some pending with the device.
    > 
    > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
    > Cc: stable@vger.kernel.org
    > Reported-by: lirongqing@baidu.com
    > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
    > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
    > Reviewed-by: Israel Rukshin <israelr@nvidia.com>
    > Signed-off-by: Parav Pandit <parav@nvidia.com>
    > ---
    > changelog:
    > v0->v1:
    > - Addressed Stefan's comment by renaming the cleanup function
    > - Improved logic for handling any outstanding requests
    >   in the bio layer
    > - Improved the cancel callback to synchronize with an ongoing done()
    > 
    > ---
    >  drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
    >  1 file changed, 95 insertions(+)
    > 
    > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
    > index 7cffea01d868..5212afdbd3c7 100644
    > --- a/drivers/block/virtio_blk.c
    > +++ b/drivers/block/virtio_blk.c
    > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
    >  	blk_status_t status;
    >  	int err;
    >  
    > +	/* Immediately fail all incoming requests if the vq is broken.
    > +	 * Once the queue is unquiesced, the upper block layer flushes any
    > +	 * pending queued requests; fail them right away.
    > +	 */
    > +	if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq)))
    > +		return BLK_STS_IOERR;
    > +
    >  	status = virtblk_prep_rq(hctx, vblk, req, vbr);
    >  	if (unlikely(status))
    >  		return status;
    > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist)
    >  	while ((req = rq_list_pop(rqlist))) {
    >  		struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
    >  
    > +		if (unlikely(virtqueue_is_broken(this_vq->vq))) {
    > +			rq_list_add_tail(&requeue_list, req);
    > +			continue;
    > +		}
    > +
    >  		if (vq && vq != this_vq)
    >  			virtblk_add_req_batch(vq, &submit_list);
    >  		vq = this_vq;
    > @@ -1554,6 +1566,87 @@ static int virtblk_probe(struct virtio_device *vdev)
    >  	return err;
    >  }
    >  
    > +static bool virtblk_request_cancel(struct request *rq, void *data)
    > +{
    > +	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
    > +	struct virtio_blk *vblk = data;
    > +	struct virtio_blk_vq *vq;
    > +	unsigned long flags;
    > +
    > +	vq = &vblk->vqs[rq->mq_hctx->queue_num];
    > +
    > +	spin_lock_irqsave(&vq->lock, flags);
    > +
    > +	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
    > +	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
    > +		blk_mq_complete_request(rq);
    > +
    > +	spin_unlock_irqrestore(&vq->lock, flags);
    > +	return true;
    > +}
    > +
    > +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
    > +{
    > +	struct request_queue *q = vblk->disk->queue;
    > +
    > +	if (!virtqueue_is_broken(vblk->vqs[0].vq))
    > +		return;
    
    Can a subset of virtqueues be broken? If so, then this code doesn't
    handle it.
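
    Something like this untested sketch could cover that case (assuming,
    as elsewhere in this driver, that vblk->num_vqs holds the per-device
    queue count):

            /* Return true if any of the device's virtqueues is broken. */
            static bool virtblk_any_vq_broken(struct virtio_blk *vblk)
            {
                    int i;

                    for (i = 0; i < vblk->num_vqs; i++)
                            if (virtqueue_is_broken(vblk->vqs[i].vq))
                                    return true;
                    return false;
            }

    virtblk_broken_device_cleanup() would then return early only when no
    vq is broken.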
    
    > +
    > +	/* Start freezing the queue, so that new requests keeps waitng at the
    
    s/waitng/waiting/
    
    > +	 * door of bio_queue_enter(). We cannot fully freeze the queue because
    > +	 * a frozen queue is an empty queue and there are pending requests,
    > +	 * so only start freezing it.
    > +	 */
    > +	blk_freeze_queue_start(q);
    > +
    > +	/* When quiescing completes, all ongoing dispatches have completed
    > +	 * and no new dispatch will happen towards the driver.
    > +	 * This ensures that when cancel is attempted later, requests are
    > +	 * not being processed by the queue_rq() or queue_rqs() handlers.
    > +	 */
    > +	blk_mq_quiesce_queue(q);
    > +
    > +	/*
    > +	 * Synchronize with any ongoing VQ callbacks, effectively quiescing
    > +	 * the device and preventing it from completing further requests
    > +	 * to the block layer. Any outstanding, incomplete requests will be
    > +	 * completed by virtblk_request_cancel().
    > +	 */
    > +	virtio_synchronize_cbs(vblk->vdev);
    > +
    > +	/* At this point, no new requests can enter queue_rq(), and the
    > +	 * completion routine will not complete any new requests for the
    > +	 * broken vq either. Hence, it is safe to cancel all started
    > +	 * requests.
    > +	 */
    > +	blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk);
    
    Although virtio_synchronize_cbs() was called, a broken/malicious device
    can still raise IRQs. Would that lead to use-after-free or similar
    undefined behavior for requests that have been submitted to the device?
    
    It seems safer to reset the device before marking the requests as
    failed.
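
    Roughly, as an untested sketch (assuming virtio_reset_device() may be
    called from this context; remove runs in process context, so that
    should hold):

            /* Fence the device first: after reset it can no longer raise
             * IRQs or DMA into the requests we are about to fail.
             */
            virtio_reset_device(vblk->vdev);

            blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel,
                                    vblk);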
    
    > +	blk_mq_tagset_wait_completed_request(&vblk->tag_set);
    > +
    > +	/* All pending requests are cleaned up. Time to resume, so that disk
    > +	 * deletion can proceed smoothly. Start the HW queues so that when
    > +	 * the queue is unquiesced, requests can again enter the driver.
    > +	 */
    > +	blk_mq_start_stopped_hw_queues(q, true);
    > +
    > +	/* Unquiescing will dispatch any pending requests that have already
    > +	 * crossed bio_queue_enter() to the driver.
    > +	 */
    > +	blk_mq_unquiesce_queue(q);
    > +
    > +	/* Wait for all pending dispatches to terminate which may have been
    > +	 * initiated after unquiescing.
    > +	 */
    > +	blk_mq_freeze_queue_wait(q);
    > +
    > +	/* Mark the disk dead so that once the queue is unfrozen, requests
    > +	 * waiting at the door of bio_queue_enter() can be aborted right
    > +	 * away.
    > +	 */
    > +	blk_mark_disk_dead(vblk->disk);
    > +
    > +	/* Unfreeze the queue so that any waiting requests will be aborted. */
    > +	blk_mq_unfreeze_queue_nomemrestore(q);
    > +}
    > +
    >  static void virtblk_remove(struct virtio_device *vdev)
    >  {
    >  	struct virtio_blk *vblk = vdev->priv;
    > @@ -1561,6 +1654,8 @@ static void virtblk_remove(struct virtio_device *vdev)
    >  	/* Make sure no work handler is accessing the device. */
    >  	flush_work(&vblk->config_work);
    >  
    > +	virtblk_broken_device_cleanup(vblk);
    > +
    >  	del_gendisk(vblk->disk);
    >  	blk_mq_free_tag_set(&vblk->tag_set);
    >  
    > -- 
    > 2.34.1
    > 
    
