* [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
@ 2025-06-02 2:44 Parav Pandit
2025-06-02 11:46 ` ALOK TIWARI
` (2 more replies)
0 siblings, 3 replies; 24+ messages in thread
From: Parav Pandit @ 2025-06-02 2:44 UTC (permalink / raw)
To: mst, stefanha, axboe, virtualization, linux-block
Cc: stable, lirongqing, kch, xuanzhuo, pbonzini, jasowang,
alok.a.tiwari, Parav Pandit, Max Gurtovoy, Israel Rukshin
When the PCI device is surprise removed, requests may not be completed
by the device because the VQ is marked as broken. Due to this, the disk
deletion hangs.

Fix it by aborting the requests when the VQ is broken.

With this fix, fio now completes swiftly.
An alternative based on IO timeouts was considered; however, when the
driver knows the block device is unresponsive, clearing the requests
promptly enables users and upper layers to react quickly.

Verified with multiple device unplug iterations, with some requests
pending in the virtio used ring and some pending with the device.
Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
Cc: stable@vger.kernel.org
Reported-by: Li RongQing <lirongqing@baidu.com>
Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Israel Rukshin <israelr@nvidia.com>
Signed-off-by: Parav Pandit <parav@nvidia.com>
---
v4->v5:
- fixed comment style so that block comments start with an empty first line
- Addressed comments from Alok
- fixed typo in broken vq check
v3->v4:
- Addressed comments from Michael
- renamed virtblk_request_cancel() to
virtblk_complete_request_with_ioerr()
- Added comments for virtblk_complete_request_with_ioerr()
- Renamed virtblk_broken_device_cleanup() to
virtblk_cleanup_broken_device()
- Added comments for virtblk_cleanup_broken_device()
- Moved the broken vq check in virtblk_remove()
- Fixed comment style to have first empty line
- replaced 'freezed' with 'frozen'
- Rephrased some comments
v2->v3:
- Addressed comments from Michael
- updated comment for synchronizing with callbacks
v1->v2:
- Addressed comments from Stefan
- fixed spelling to 'waiting'
- Addressed comments from Michael
- Dropped checking broken vq from queue_rq() and queue_rqs()
because it is checked in lower layer routines in virtio core
v0->v1:
- Fixed comments from Stefan to rename a cleanup function
- Improved logic for handling any outstanding requests
in bio layer
- improved cancel callback to sync with ongoing done()
---
drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
1 file changed, 95 insertions(+)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7cffea01d868..c5e383c0ac48 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1554,6 +1554,98 @@ static int virtblk_probe(struct virtio_device *vdev)
return err;
}
+/*
+ * If the vq is broken, the device will not complete requests,
+ * so we complete them on its behalf.
+ */
+static bool virtblk_complete_request_with_ioerr(struct request *rq, void *data)
+{
+ struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
+ struct virtio_blk *vblk = data;
+ struct virtio_blk_vq *vq;
+ unsigned long flags;
+
+ vq = &vblk->vqs[rq->mq_hctx->queue_num];
+
+ spin_lock_irqsave(&vq->lock, flags);
+
+ vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
+ if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
+ blk_mq_complete_request(rq);
+
+ spin_unlock_irqrestore(&vq->lock, flags);
+ return true;
+}
+
+/*
+ * If the device is broken, it will not use any buffers, and waiting
+ * for that to happen is pointless. We'll do the cleanup in the driver,
+ * completing all requests for the device.
+ */
+static void virtblk_cleanup_broken_device(struct virtio_blk *vblk)
+{
+ struct request_queue *q = vblk->disk->queue;
+
+ /*
+ * Start freezing the queue so that new requests keep waiting at the
+ * door of bio_queue_enter(). We cannot fully freeze the queue because
+ * a frozen queue is an empty queue and there are pending requests, so
+ * only start freezing it.
+ */
+ blk_freeze_queue_start(q);
+
+ /*
+ * When quiescing completes, all ongoing dispatches have completed
+ * and no new dispatch will happen towards the driver.
+ */
+ blk_mq_quiesce_queue(q);
+
+ /*
+ * Synchronize with any ongoing VQ callbacks that may have started
+ * before the VQs were marked as broken. Any outstanding requests
+ * will be completed by virtblk_complete_request_with_ioerr().
+ */
+ virtio_synchronize_cbs(vblk->vdev);
+
+ /*
+ * At this point, no new requests can enter queue_rq(), and the
+ * completion routine will not complete any new requests for the
+ * broken vq either. Hence, it is safe to cancel all started
+ * requests.
+ */
+ blk_mq_tagset_busy_iter(&vblk->tag_set,
+ virtblk_complete_request_with_ioerr, vblk);
+ blk_mq_tagset_wait_completed_request(&vblk->tag_set);
+
+ /*
+ * All pending requests are cleaned up. Time to resume so that disk
+ * deletion can be smooth. Start the HW queues so that when the queue
+ * is unquiesced, requests can again enter the driver.
+ */
+ blk_mq_start_stopped_hw_queues(q, true);
+
+ /*
+ * Unquiescing will trigger dispatching of any pending requests that
+ * have crossed bio_queue_enter() to the driver.
+ */
+ blk_mq_unquiesce_queue(q);
+
+ /*
+ * Wait for all pending dispatches, which may have been initiated
+ * after unquiescing, to terminate.
+ */
+ blk_mq_freeze_queue_wait(q);
+
+ /*
+ * Mark the disk dead so that once we unfreeze the queue, requests
+ * waiting at the door of bio_queue_enter() can be aborted right away.
+ */
+ blk_mark_disk_dead(vblk->disk);
+
+ /* Unfreeze the queue so that any waiting requests will be aborted. */
+ blk_mq_unfreeze_queue_nomemrestore(q);
+}
+
static void virtblk_remove(struct virtio_device *vdev)
{
struct virtio_blk *vblk = vdev->priv;
@@ -1561,6 +1653,9 @@ static void virtblk_remove(struct virtio_device *vdev)
/* Make sure no work handler is accessing the device. */
flush_work(&vblk->config_work);
+ if (virtqueue_is_broken(vblk->vqs[0].vq))
+ virtblk_cleanup_broken_device(vblk);
+
del_gendisk(vblk->disk);
blk_mq_free_tag_set(&vblk->tag_set);
--
2.34.1
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-02 2:44 [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal Parav Pandit
@ 2025-06-02 11:46 ` ALOK TIWARI
2025-06-18 9:29 ` Parav Pandit
2025-06-24 18:56 ` Stefan Hajnoczi
2 siblings, 0 replies; 24+ messages in thread
From: ALOK TIWARI @ 2025-06-02 11:46 UTC (permalink / raw)
To: Parav Pandit, mst, stefanha, axboe, virtualization, linux-block
Cc: stable, lirongqing, kch, xuanzhuo, pbonzini, jasowang,
Max Gurtovoy, Israel Rukshin
On 02-06-2025 08:14, Parav Pandit wrote:
> When the PCI device is surprise removed, requests may not complete
> the device as the VQ is marked as broken. Due to this, the disk
> deletion hangs.
>
> Fix it by aborting the requests when the VQ is broken.
>
> With this fix now fio completes swiftly.
> An alternative of IO timeout has been considered, however
> when the driver knows about unresponsive block device, swiftly clearing
> them enables users and upper layers to react quickly.
>
> Verified with multiple device unplug iterations with pending requests in
> virtio used ring and some pending with the device.
>
> Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> Cc: stable@vger.kernel.org
> Reported-by: Li RongQing <lirongqing@baidu.com>
> Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
> Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Parav Pandit <parav@nvidia.com>
Looks good to me.
Reviewed-by: Alok Tiwari <alok.a.tiwari@oracle.com>
Thanks,
Alok
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-02 2:44 [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal Parav Pandit
2025-06-02 11:46 ` ALOK TIWARI
@ 2025-06-18 9:29 ` Parav Pandit
2025-06-24 18:56 ` Stefan Hajnoczi
2 siblings, 0 replies; 24+ messages in thread
From: Parav Pandit @ 2025-06-18 9:29 UTC (permalink / raw)
To: Parav Pandit, mst@redhat.com, stefanha@redhat.com,
axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org
Cc: stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL),
Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com,
pbonzini@redhat.com, jasowang@redhat.com,
alok.a.tiwari@oracle.com, Max Gurtovoy, Israel Rukshin
Hi Michael, Stefan,
> From: Parav Pandit <parav@nvidia.com>
> Sent: 02 June 2025 08:15 AM
> To: mst@redhat.com; stefanha@redhat.com; axboe@kernel.dk;
> virtualization@lists.linux.dev; linux-block@vger.kernel.org
> Cc: stable@vger.kernel.org; NBU-Contact-Li Rongqing (EXTERNAL)
> <lirongqing@baidu.com>; Chaitanya Kulkarni <kch@nvidia.com>;
> xuanzhuo@linux.alibaba.com; pbonzini@redhat.com;
> jasowang@redhat.com; alok.a.tiwari@oracle.com; Parav Pandit
> <parav@nvidia.com>; Max Gurtovoy <mgurtovoy@nvidia.com>; Israel
> Rukshin <israelr@nvidia.com>
> Subject: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise
> removal
>
> When the PCI device is surprise removed, requests may not complete the
> device as the VQ is marked as broken. Due to this, the disk deletion hangs.
>
> Fix it by aborting the requests when the VQ is broken.
>
> With this fix now fio completes swiftly.
> An alternative of IO timeout has been considered, however when the driver
> knows about unresponsive block device, swiftly clearing them enables users
> and upper layers to react quickly.
>
> Verified with multiple device unplug iterations with pending requests in virtio
> used ring and some pending with the device.
>
> Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci
> device")
> Cc: stable@vger.kernel.org
> Reported-by: Li RongQing <lirongqing@baidu.com>
> Closes:
> https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741
> @baidu.com/
> Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Parav Pandit <parav@nvidia.com>
>
> ---
> v4->v5:
> - fixed comment style where comment to start with one empty line at start
> - Addressed comments from Alok
> - fixed typo in broken vq check
Did you get a chance to review this version, in which I addressed all the comments you proposed?
[..]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-02 2:44 [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal Parav Pandit
2025-06-02 11:46 ` ALOK TIWARI
2025-06-18 9:29 ` Parav Pandit
@ 2025-06-24 18:56 ` Stefan Hajnoczi
2025-06-24 19:01 ` Parav Pandit
` (2 more replies)
2 siblings, 3 replies; 24+ messages in thread
From: Stefan Hajnoczi @ 2025-06-24 18:56 UTC (permalink / raw)
To: Parav Pandit
Cc: mst, axboe, virtualization, linux-block, stable, lirongqing, kch,
xuanzhuo, pbonzini, jasowang, alok.a.tiwari, Max Gurtovoy,
Israel Rukshin
On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> When the PCI device is surprise removed, requests may not complete
> the device as the VQ is marked as broken. Due to this, the disk
> deletion hangs.
There are loops in the core virtio driver code that expect device
register reads to eventually return 0:
drivers/virtio/virtio_pci_modern.c:vp_reset()
drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
Is there a hang if these loops are hit when a device has been surprise
removed? I'm trying to understand whether surprise removal is fully
supported or whether this patch is one step in that direction.
Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[..]
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-24 18:56 ` Stefan Hajnoczi
@ 2025-06-24 19:01 ` Parav Pandit
2025-06-24 19:06 ` Michael S. Tsirkin
2025-06-24 19:06 ` Michael S. Tsirkin
2025-06-25 12:39 ` Michael S. Tsirkin
2 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2025-06-24 19:01 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: mst@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
> From: Stefan Hajnoczi <stefanha@redhat.com>
> Sent: 25 June 2025 12:26 AM
>
> On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > When the PCI device is surprise removed, requests may not complete the
> > device as the VQ is marked as broken. Due to this, the disk deletion
> > hangs.
>
> There are loops in the core virtio driver code that expect device register reads
> to eventually return 0:
> drivers/virtio/virtio_pci_modern.c:vp_reset()
> drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
>
> Is there a hang if these loops are hit when a device has been surprise
> removed? I'm trying to understand whether surprise removal is fully
> supported or whether this patch is one step in that direction.
>
I answered this in one of the previous replies to Michael, but I don't have the link handy.
Surprise removal is not fully supported by this patch; those loops will hang.
This patch restores the driver to the same state it was in before the commit in the Fixes tag.
Virtio stack level work is still needed to fully support surprise removal, including the reset flow you rightly pointed out.
> Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>
Thanks.
[..]
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-24 18:56 ` Stefan Hajnoczi
2025-06-24 19:01 ` Parav Pandit
@ 2025-06-24 19:06 ` Michael S. Tsirkin
2025-06-25 12:39 ` Michael S. Tsirkin
2 siblings, 0 replies; 24+ messages in thread
From: Michael S. Tsirkin @ 2025-06-24 19:06 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Parav Pandit, axboe, virtualization, linux-block, stable,
lirongqing, kch, xuanzhuo, pbonzini, jasowang, alok.a.tiwari,
Max Gurtovoy, Israel Rukshin
On Tue, Jun 24, 2025 at 02:56:22PM -0400, Stefan Hajnoczi wrote:
> On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > When the PCI device is surprise removed, requests may not complete
> > the device as the VQ is marked as broken. Due to this, the disk
> > deletion hangs.
>
> There are loops in the core virtio driver code that expect device
> register reads to eventually return 0:
> drivers/virtio/virtio_pci_modern.c:vp_reset()
> drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
>
> Is there a hang if these loops are hit when a device has been surprise
> removed?
Probably, yes. We can't really check the broken status there, though,
I think - reset is normally called even with a broken device.
> I'm trying to understand whether surprise removal is fully
> supported or whether this patch is one step in that direction.
I think it's a step.
> Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>
> >
> > Fix it by aborting the requests when the VQ is broken.
> >
> > With this fix now fio completes swiftly.
> > An alternative of IO timeout has been considered, however
> > when the driver knows about unresponsive block device, swiftly clearing
> > them enables users and upper layers to react quickly.
> >
> > Verified with multiple device unplug iterations with pending requests in
> > virtio used ring and some pending with the device.
> >
> > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> > Cc: stable@vger.kernel.org
> > Reported-by: Li RongQing <lirongqing@baidu.com>
> > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
> > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> > Signed-off-by: Parav Pandit <parav@nvidia.com>
> >
> > ---
> > v4->v5:
> > - fixed comment style where comment to start with one empty line at start
> > - Addressed comments from Alok
> > - fixed typo in broken vq check
> > v3->v4:
> > - Addressed comments from Michael
> > - renamed virtblk_request_cancel() to
> > virtblk_complete_request_with_ioerr()
> > - Added comments for virtblk_complete_request_with_ioerr()
> > - Renamed virtblk_broken_device_cleanup() to
> > virtblk_cleanup_broken_device()
> > - Added comments for virtblk_cleanup_broken_device()
> > - Moved the broken vq check in virtblk_remove()
> > - Fixed comment style to have first empty line
> > - replaced freezed to frozen
> > - Fixed comments rephrased
> >
> > v2->v3:
> > - Addressed comments from Michael
> > - updated comment for synchronizing with callbacks
> >
> > v1->v2:
> > - Addressed comments from Stephan
> > - fixed spelling to 'waiting'
> > - Addressed comments from Michael
> > - Dropped checking broken vq from queue_rq() and queue_rqs()
> > because it is checked in lower layer routines in virtio core
> >
> > v0->v1:
> > - Fixed comments from Stefan to rename a cleanup function
> > - Improved logic for handling any outstanding requests
> > in bio layer
> > - improved cancel callback to sync with ongoing done()
> > ---
> > drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 95 insertions(+)
> >
> > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > index 7cffea01d868..c5e383c0ac48 100644
> > --- a/drivers/block/virtio_blk.c
> > +++ b/drivers/block/virtio_blk.c
> > @@ -1554,6 +1554,98 @@ static int virtblk_probe(struct virtio_device *vdev)
> > return err;
> > }
> >
> > +/*
> > + * If the vq is broken, device will not complete requests.
> > + * So we do it for the device.
> > + */
> > +static bool virtblk_complete_request_with_ioerr(struct request *rq, void *data)
> > +{
> > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > + struct virtio_blk *vblk = data;
> > + struct virtio_blk_vq *vq;
> > + unsigned long flags;
> > +
> > + vq = &vblk->vqs[rq->mq_hctx->queue_num];
> > +
> > + spin_lock_irqsave(&vq->lock, flags);
> > +
> > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
> > + blk_mq_complete_request(rq);
> > +
> > + spin_unlock_irqrestore(&vq->lock, flags);
> > + return true;
> > +}
> > +
> > +/*
> > + * If the device is broken, it will not use any buffers and waiting
> > + * for that to happen is pointless. We'll do the cleanup in the driver,
> > + * completing all requests for the device.
> > + */
> > +static void virtblk_cleanup_broken_device(struct virtio_blk *vblk)
> > +{
> > + struct request_queue *q = vblk->disk->queue;
> > +
> > + /*
> > + * Start freezing the queue, so that new requests keeps waiting at the
> > + * door of bio_queue_enter(). We cannot fully freeze the queue because
> > + * frozen queue is an empty queue and there are pending requests, so
> > + * only start freezing it.
> > + */
> > + blk_freeze_queue_start(q);
> > +
> > + /*
> > + * When quiescing completes, all ongoing dispatches have completed
> > + * and no new dispatch will happen towards the driver.
> > + */
> > + blk_mq_quiesce_queue(q);
> > +
> > + /*
> > + * Synchronize with any ongoing VQ callbacks that may have started
> > + * before the VQs were marked as broken. Any outstanding requests
> > + * will be completed by virtblk_complete_request_with_ioerr().
> > + */
> > + virtio_synchronize_cbs(vblk->vdev);
> > +
> > + /*
> > + * At this point, no new requests can enter queue_rq(), and the
> > + * completion routine will not complete any new requests either for the
> > + * broken vq. Hence, it is safe to cancel all started requests.
> > + */
> > + blk_mq_tagset_busy_iter(&vblk->tag_set,
> > + virtblk_complete_request_with_ioerr, vblk);
> > + blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> > +
> > + /*
> > + * All pending requests are cleaned up. Time to resume so that disk
> > + * deletion can be smooth. Start the HW queues so that when the queue
> > + * is unquiesced, requests can again enter the driver.
> > + */
> > + blk_mq_start_stopped_hw_queues(q, true);
> > +
> > + /*
> > + * Unquiescing will dispatch to the driver any pending requests that
> > + * have already crossed bio_queue_enter().
> > + */
> > + blk_mq_unquiesce_queue(q);
> > +
> > + /*
> > + * Wait for all pending dispatches, which may have been initiated
> > + * after unquiescing, to terminate.
> > + */
> > + blk_mq_freeze_queue_wait(q);
> > +
> > + /*
> > + * Mark the disk dead so that once we unfreeze the queue, requests
> > + * waiting at the door of bio_queue_enter() can be aborted right away.
> > + */
> > + blk_mark_disk_dead(vblk->disk);
> > +
> > + /* Unfreeze the queue so that any waiting requests will be aborted. */
> > + blk_mq_unfreeze_queue_nomemrestore(q);
> > +}
> > +
> > static void virtblk_remove(struct virtio_device *vdev)
> > {
> > struct virtio_blk *vblk = vdev->priv;
> > @@ -1561,6 +1653,9 @@ static void virtblk_remove(struct virtio_device *vdev)
> > /* Make sure no work handler is accessing the device. */
> > flush_work(&vblk->config_work);
> >
> > + if (virtqueue_is_broken(vblk->vqs[0].vq))
> > + virtblk_cleanup_broken_device(vblk);
> > +
> > del_gendisk(vblk->disk);
> > blk_mq_free_tag_set(&vblk->tag_set);
> >
> > --
> > 2.34.1
> >
^ permalink raw reply [flat|nested] 24+ messages in thread
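For readers following the patch above, the cancellation and teardown ordering can be modeled in userspace. This is an illustrative sketch only: the `Request`, `Queue`, and function names are this sketch's own inventions, and the real driver relies on blk-mq and virtio primitives (`blk_freeze_queue_start()`, `blk_mq_quiesce_queue()`, `virtio_synchronize_cbs()`, and so on) rather than these simplified flags.

```python
# Illustrative userspace model of the patch's broken-device cleanup flow.
# All names are the sketch's own; the real driver uses blk-mq primitives.

VIRTIO_BLK_S_IOERR = 1  # placeholder value for illustration

class Request:
    def __init__(self, started=False, completed=False):
        self.started = started
        self.completed = completed
        self.status = None

def complete_request_with_ioerr(rq):
    """Mirrors virtblk_complete_request_with_ioerr(): set IOERR status,
    then complete only requests that are started but not yet completed."""
    rq.status = VIRTIO_BLK_S_IOERR
    if rq.started and not rq.completed:
        rq.completed = True

class Queue:
    def __init__(self):
        self.freeze_started = False  # new I/O waits at bio_queue_enter()
        self.quiesced = False        # no dispatch into the driver
        self.disk_dead = False       # waiters are aborted on unfreeze

def cleanup_broken_device(q, requests):
    """Mirrors the ordering in virtblk_cleanup_broken_device()."""
    q.freeze_started = True          # blk_freeze_queue_start()
    q.quiesced = True                # blk_mq_quiesce_queue()
    # virtio_synchronize_cbs() would run here, syncing with VQ callbacks.
    for rq in requests:              # blk_mq_tagset_busy_iter()
        complete_request_with_ioerr(rq)
    q.quiesced = False               # blk_mq_unquiesce_queue()
    q.disk_dead = True               # blk_mark_disk_dead()
    q.freeze_started = False         # blk_mq_unfreeze_queue_nomemrestore()
```

The key design point the model captures is the ordering: dispatch is stopped and callbacks are synchronized before any request is force-completed, and the disk is marked dead only after all in-flight requests have drained, so waiters at `bio_queue_enter()` abort cleanly on unfreeze.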
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-24 19:01 ` Parav Pandit
@ 2025-06-24 19:06 ` Michael S. Tsirkin
2025-06-24 19:11 ` Parav Pandit
0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2025-06-24 19:06 UTC (permalink / raw)
To: Parav Pandit
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
>
>
> > From: Stefan Hajnoczi <stefanha@redhat.com>
> > Sent: 25 June 2025 12:26 AM
> >
> > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > > When the PCI device is surprise removed, requests may not complete the
> > > device as the VQ is marked as broken. Due to this, the disk deletion
> > > hangs.
> >
> > There are loops in the core virtio driver code that expect device register reads
> > to eventually return 0:
> > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> >
> > Is there a hang if these loops are hit when a device has been surprise
> > removed? I'm trying to understand whether surprise removal is fully
> > supported or whether this patch is one step in that direction.
> >
> In one of the previous replies I answered this to Michael, but I don't have the link handy.
> It is not fully supported by this patch. It will hang.
>
> This patch restores the driver to the same state it was in before the patch in the Fixes tag.
> Virtio stack level work is needed to support surprise removal, including the reset flow you rightly pointed out.
Have plans to do that?
> > Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
> > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> >
> Thanks.
>
* RE: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-24 19:06 ` Michael S. Tsirkin
@ 2025-06-24 19:11 ` Parav Pandit
2025-06-24 19:54 ` Michael S. Tsirkin
0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2025-06-24 19:11 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: 25 June 2025 12:37 AM
>
> On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> >
> >
> > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > Sent: 25 June 2025 12:26 AM
> > >
> > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > > > When the PCI device is surprise removed, requests may not complete
> > > > the device as the VQ is marked as broken. Due to this, the disk
> > > > deletion hangs.
> > >
> > > There are loops in the core virtio driver code that expect device
> > > register reads to eventually return 0:
> > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> > >
> > > Is there a hang if these loops are hit when a device has been
> > > surprise removed? I'm trying to understand whether surprise removal
> > > is fully supported or whether this patch is one step in that direction.
> > >
> > In one of the previous replies I answered to Michael, but don't have the link
> handy.
> > It is not fully supported by this patch. It will hang.
> >
> > This patch restores driver back to the same state what it was before the fixes
> tag patch.
> > The virtio stack level work is needed to support surprise removal, including
> the reset flow you rightly pointed.
>
> Have plans to do that?
>
I haven't given it enough thought yet.
> > > Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
> > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > >
> > Thanks.
> >
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-24 19:11 ` Parav Pandit
@ 2025-06-24 19:54 ` Michael S. Tsirkin
2025-06-25 2:55 ` Parav Pandit
0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2025-06-24 19:54 UTC (permalink / raw)
To: Parav Pandit
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
>
>
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: 25 June 2025 12:37 AM
> >
> > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> > >
> > >
> > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > Sent: 25 June 2025 12:26 AM
> > > >
> > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > > > > When the PCI device is surprise removed, requests may not complete
> > > > > the device as the VQ is marked as broken. Due to this, the disk
> > > > > deletion hangs.
> > > >
> > > > There are loops in the core virtio driver code that expect device
> > > > register reads to eventually return 0:
> > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> > > >
> > > > Is there a hang if these loops are hit when a device has been
> > > > surprise removed? I'm trying to understand whether surprise removal
> > > > is fully supported or whether this patch is one step in that direction.
> > > >
> > > In one of the previous replies I answered to Michael, but don't have the link
> > handy.
> > > It is not fully supported by this patch. It will hang.
> > >
> > > This patch restores driver back to the same state what it was before the fixes
> > tag patch.
> > > The virtio stack level work is needed to support surprise removal, including
> > the reset flow you rightly pointed.
> >
> > Have plans to do that?
> >
> Didn't give enough thoughts on it yet.
This one is kind of pointless then? It just fixes the specific race window
that your test harness happens to hit?
Maybe it's better to wait until someone does a comprehensive fix...
> > > > Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
> > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > >
> > > Thanks.
> > >
* RE: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-24 19:54 ` Michael S. Tsirkin
@ 2025-06-25 2:55 ` Parav Pandit
2025-06-25 11:04 ` Michael S. Tsirkin
0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2025-06-25 2:55 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: 25 June 2025 01:24 AM
>
> On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
> >
> >
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: 25 June 2025 12:37 AM
> > >
> > > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> > > >
> > > >
> > > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > Sent: 25 June 2025 12:26 AM
> > > > >
> > > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > > > > > When the PCI device is surprise removed, requests may not
> > > > > > complete the device as the VQ is marked as broken. Due to
> > > > > > this, the disk deletion hangs.
> > > > >
> > > > > There are loops in the core virtio driver code that expect
> > > > > device register reads to eventually return 0:
> > > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset
> > > > > ()
> > > > >
> > > > > Is there a hang if these loops are hit when a device has been
> > > > > surprise removed? I'm trying to understand whether surprise
> > > > > removal is fully supported or whether this patch is one step in that
> direction.
> > > > >
> > > > In one of the previous replies I answered Michael, but I don't have
> > > > the link handy.
> > > > It is not fully supported by this patch. It will hang.
> > > >
> > > > This patch restores the driver to the same state it was in before the
> > > > patch in the Fixes tag.
> > > > The virtio stack level work is needed to support surprise removal,
> > > > including the reset flow you rightly pointed out.
> > >
> > > Have plans to do that?
> > >
> > Didn't give it enough thought yet.
>
> This one is kind of pointless then? It just fixes the specific race window that
> your test harness happens to hit?
>
It was reported by Li from Baidu, whose tests failed.
I missed adding the "Reported-by" tag in v5; I had it until v4. :(
> Maybe it's better to wait until someone does a comprehensive fix..
>
>
Oh, I was under the impression that you wanted to move forward from the discussion of v4.
If you prefer comprehensive support across the virtio layers, I suggest reverting the patch cited in the Fixes tag.
Otherwise, the driver is in a degraded state, as virtio never supported surprise removal.
By reverting the cited patch (or with this fix), requests and disk deletion will not hang.
Please let me know if I should re-send a revert of the patch listed in the Fixes tag.
> > > > > Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
> > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > >
> > > > Thanks.
> > > >
> > > > > >
> > > > > > Fix it by aborting the requests when the VQ is broken.
> > > > > >
> > > > > > With this fix now fio completes swiftly.
> > > > > > An alternative of an I/O timeout has been considered; however,
> > > > > > when the driver knows the block device is unresponsive, swiftly
> > > > > > clearing the requests enables users and upper layers to react quickly.
> > > > > >
> > > > > > Verified with multiple device unplug iterations with pending
> > > > > > requests in virtio used ring and some pending with the device.
> > > > > >
> > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of
> > > > > > virtio pci device")
> > > > > > Cc: stable@vger.kernel.org
> > > > > > Reported-by: Li RongQing <lirongqing@baidu.com>
> > > > > > Closes:
> > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb7
> > > > > > 3ca9
> > > > > > b474
> > > > > > 1@baidu.com/
> > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com>
> > > > > >
> > > > > > ---
> > > > > > v4->v5:
> > > > > > - fixed comment style where comment to start with one empty
> > > > > > line at start
> > > > > > - Addressed comments from Alok
> > > > > > - fixed typo in broken vq check
> > > > > > v3->v4:
> > > > > > - Addressed comments from Michael
> > > > > > - renamed virtblk_request_cancel() to
> > > > > > virtblk_complete_request_with_ioerr()
> > > > > > - Added comments for virtblk_complete_request_with_ioerr()
> > > > > > - Renamed virtblk_broken_device_cleanup() to
> > > > > > virtblk_cleanup_broken_device()
> > > > > > - Added comments for virtblk_cleanup_broken_device()
> > > > > > - Moved the broken vq check in virtblk_remove()
> > > > > > - Fixed comment style to have first empty line
> > > > > > - replaced freezed to frozen
> > > > > > - Fixed comments rephrased
> > > > > >
> > > > > > v2->v3:
> > > > > > - Addressed comments from Michael
> > > > > > - updated comment for synchronizing with callbacks
> > > > > >
> > > > > > v1->v2:
> > > > > > - Addressed comments from Stefan
> > > > > > - fixed spelling to 'waiting'
> > > > > > - Addressed comments from Michael
> > > > > > - Dropped checking broken vq from queue_rq() and queue_rqs()
> > > > > > because it is checked in lower layer routines in virtio core
> > > > > >
> > > > > > v0->v1:
> > > > > > - Fixed comments from Stefan to rename a cleanup function
> > > > > > - Improved logic for handling any outstanding requests
> > > > > > in bio layer
> > > > > > - improved cancel callback to sync with ongoing done()
> > > > > > ---
> > > > > > drivers/block/virtio_blk.c | 95
> > > > > > ++++++++++++++++++++++++++++++++++++++
> > > > > > 1 file changed, 95 insertions(+)
> > > > > >
> > > > > > diff --git a/drivers/block/virtio_blk.c
> > > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..c5e383c0ac48
> > > > > > 100644
> > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > @@ -1554,6 +1554,98 @@ static int virtblk_probe(struct
> > > > > > virtio_device
> > > > > *vdev)
> > > > > > return err;
> > > > > > }
> > > > > >
> > > > > > +/*
> > > > > > + * If the vq is broken, device will not complete requests.
> > > > > > + * So we do it for the device.
> > > > > > + */
> > > > > > +static bool virtblk_complete_request_with_ioerr(struct
> > > > > > +request *rq, void *data) {
> > > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > > > > > + struct virtio_blk *vblk = data;
> > > > > > + struct virtio_blk_vq *vq;
> > > > > > + unsigned long flags;
> > > > > > +
> > > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num];
> > > > > > +
> > > > > > + spin_lock_irqsave(&vq->lock, flags);
> > > > > > +
> > > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > > > > > + if (blk_mq_request_started(rq) &&
> !blk_mq_request_completed(rq))
> > > > > > + blk_mq_complete_request(rq);
> > > > > > +
> > > > > > + spin_unlock_irqrestore(&vq->lock, flags);
> > > > > > + return true;
> > > > > > +}
> > > > > > +
> > > > > > +/*
> > > > > > + * If the device is broken, it will not use any buffers and
> > > > > > +waiting
> > > > > > + * for that to happen is pointless. We'll do the cleanup in
> > > > > > +the driver,
> > > > > > + * completing all requests for the device.
> > > > > > + */
> > > > > > +static void virtblk_cleanup_broken_device(struct virtio_blk *vblk) {
> > > > > > + struct request_queue *q = vblk->disk->queue;
> > > > > > +
> > > > > > + /*
> > > > > > + * Start freezing the queue, so that new requests keep waiting at the
> > > > > > + * door of bio_queue_enter(). We cannot fully freeze the
> > > > > > +queue
> > > > > because
> > > > > > + * frozen queue is an empty queue and there are pending
> requests, so
> > > > > > + * only start freezing it.
> > > > > > + */
> > > > > > + blk_freeze_queue_start(q);
> > > > > > +
> > > > > > + /*
> > > > > > + * When quiescing completes, all ongoing dispatches have
> completed
> > > > > > + * and no new dispatch will happen towards the driver.
> > > > > > + */
> > > > > > + blk_mq_quiesce_queue(q);
> > > > > > +
> > > > > > + /*
> > > > > > + * Synchronize with any ongoing VQ callbacks that may have
> started
> > > > > > + * before the VQs were marked as broken. Any outstanding
> requests
> > > > > > + * will be completed by
> virtblk_complete_request_with_ioerr().
> > > > > > + */
> > > > > > + virtio_synchronize_cbs(vblk->vdev);
> > > > > > +
> > > > > > + /*
> > > > > > + * At this point, no new requests can enter the queue_rq()
> and
> > > > > > + * completion routine will not complete any new requests
> > > > > > +either for
> > > > > the
> > > > > > + * broken vq. Hence, it is safe to cancel all requests which are
> > > > > > + * started.
> > > > > > + */
> > > > > > + blk_mq_tagset_busy_iter(&vblk->tag_set,
> > > > > > + virtblk_complete_request_with_ioerr,
> vblk);
> > > > > > + blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> > > > > > +
> > > > > > + /*
> > > > > > + * All pending requests are cleaned up. Time to resume so
> that disk
> > > > > > + * deletion can be smooth. Start the HW queues so that when
> > > > > > +queue
> > > > > is
> > > > > > + * unquiesced requests can again enter the driver.
> > > > > > + */
> > > > > > + blk_mq_start_stopped_hw_queues(q, true);
> > > > > > +
> > > > > > + /*
> > > > > > + * Unquiescing will trigger dispatching any pending requests
> to the
> > > > > > + * driver which has crossed bio_queue_enter() to the driver.
> > > > > > + */
> > > > > > + blk_mq_unquiesce_queue(q);
> > > > > > +
> > > > > > + /*
> > > > > > + * Wait for all pending dispatches to terminate which may
> have been
> > > > > > + * initiated after unquiescing.
> > > > > > + */
> > > > > > + blk_mq_freeze_queue_wait(q);
> > > > > > +
> > > > > > + /*
> > > > > > + * Mark the disk dead so that once we unfreeze the queue,
> requests
> > > > > > + * waiting at the door of bio_queue_enter() can be aborted
> > > > > > +right
> > > > > away.
> > > > > > + */
> > > > > > + blk_mark_disk_dead(vblk->disk);
> > > > > > +
> > > > > > + /* Unfreeze the queue so that any waiting requests will be
> aborted. */
> > > > > > + blk_mq_unfreeze_queue_nomemrestore(q);
> > > > > > +}
> > > > > > +
> > > > > > static void virtblk_remove(struct virtio_device *vdev) {
> > > > > > struct virtio_blk *vblk = vdev->priv; @@ -1561,6 +1653,9 @@
> > > > > > static void virtblk_remove(struct virtio_device *vdev)
> > > > > > /* Make sure no work handler is accessing the device. */
> > > > > > flush_work(&vblk->config_work);
> > > > > >
> > > > > > + if (virtqueue_is_broken(vblk->vqs[0].vq))
> > > > > > + virtblk_cleanup_broken_device(vblk);
> > > > > > +
> > > > > > del_gendisk(vblk->disk);
> > > > > > blk_mq_free_tag_set(&vblk->tag_set);
> > > > > >
> > > > > > --
> > > > > > 2.34.1
> > > > > >
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-25 2:55 ` Parav Pandit
@ 2025-06-25 11:04 ` Michael S. Tsirkin
2025-06-25 11:08 ` Parav Pandit
0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2025-06-25 11:04 UTC (permalink / raw)
To: Parav Pandit
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
>
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: 25 June 2025 01:24 AM
> >
> > On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
> > >
> > >
> > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > Sent: 25 June 2025 12:37 AM
> > > >
> > > > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> > > > >
> > > > >
> > > > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > Sent: 25 June 2025 12:26 AM
> > > > > >
> > > > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > > > > > > When the PCI device is surprise removed, requests may not be
> > > > > > > completed by the device as the VQ is marked as broken. Due to
> > > > > > > this, the disk deletion hangs.
> > > > > >
> > > > > > There are loops in the core virtio driver code that expect
> > > > > > device register reads to eventually return 0:
> > > > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset
> > > > > > ()
> > > > > >
> > > > > > Is there a hang if these loops are hit when a device has been
> > > > > > surprise removed? I'm trying to understand whether surprise
> > > > > > removal is fully supported or whether this patch is one step in that
> > direction.
> > > > > >
> > > > > In one of the previous replies I answered Michael, but I don't have
> > > > > the link handy.
> > > > > It is not fully supported by this patch. It will hang.
> > > > >
> > > > > This patch restores the driver to the same state it was in before the
> > > > > patch in the Fixes tag.
> > > > > The virtio stack level work is needed to support surprise removal,
> > > > > including the reset flow you rightly pointed out.
> > > >
> > > > Have plans to do that?
> > > >
> > > Didn't give it enough thought yet.
> >
> > This one is kind of pointless then? It just fixes the specific race window that
> > your test harness happens to hit?
> >
> It was reported by Li from Baidu, whose tests failed.
> I missed adding the "Reported-by" tag in v5; I had it until v4. :(
>
> > Maybe it's better to wait until someone does a comprehensive fix..
> >
> >
> Oh, I was under the impression that you wanted to move forward from the discussion of v4.
> If you prefer comprehensive support across the virtio layers, I suggest reverting the patch cited in the Fixes tag.
>
> Otherwise, the driver is in a degraded state, as virtio never supported surprise removal.
> By reverting the cited patch (or with this fix), requests and disk deletion will not hang.
But they will hang in virtio core on reset, will they not? The tests
just do not happen to trigger this?
> Please let me know if I should re-send a revert of the patch listed in the Fixes tag.
>
> > > > > > Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
> > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > >
> > > > > Thanks.
> > > > >
> > > > > > >
> > > > > > > Fix it by aborting the requests when the VQ is broken.
> > > > > > >
> > > > > > > With this fix now fio completes swiftly.
> > > > > > > An alternative of an I/O timeout has been considered; however,
> > > > > > > when the driver knows the block device is unresponsive, swiftly
> > > > > > > clearing the requests enables users and upper layers to react quickly.
> > > > > > >
> > > > > > > Verified with multiple device unplug iterations with pending
> > > > > > > requests in virtio used ring and some pending with the device.
> > > > > > >
> > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of
> > > > > > > virtio pci device")
> > > > > > > Cc: stable@vger.kernel.org
> > > > > > > Reported-by: Li RongQing <lirongqing@baidu.com>
> > > > > > > Closes:
> > > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb7
> > > > > > > 3ca9
> > > > > > > b474
> > > > > > > 1@baidu.com/
> > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com>
> > > > > > >
> > > > > > > ---
> > > > > > > v4->v5:
> > > > > > > - fixed comment style where comment to start with one empty
> > > > > > > line at start
> > > > > > > - Addressed comments from Alok
> > > > > > > - fixed typo in broken vq check
> > > > > > > v3->v4:
> > > > > > > - Addressed comments from Michael
> > > > > > > - renamed virtblk_request_cancel() to
> > > > > > > virtblk_complete_request_with_ioerr()
> > > > > > > - Added comments for virtblk_complete_request_with_ioerr()
> > > > > > > - Renamed virtblk_broken_device_cleanup() to
> > > > > > > virtblk_cleanup_broken_device()
> > > > > > > - Added comments for virtblk_cleanup_broken_device()
> > > > > > > - Moved the broken vq check in virtblk_remove()
> > > > > > > - Fixed comment style to have first empty line
> > > > > > > - replaced freezed to frozen
> > > > > > > - Fixed comments rephrased
> > > > > > >
> > > > > > > v2->v3:
> > > > > > > - Addressed comments from Michael
> > > > > > > - updated comment for synchronizing with callbacks
> > > > > > >
> > > > > > > v1->v2:
> > > > > > > - Addressed comments from Stefan
> > > > > > > - fixed spelling to 'waiting'
> > > > > > > - Addressed comments from Michael
> > > > > > > - Dropped checking broken vq from queue_rq() and queue_rqs()
> > > > > > > because it is checked in lower layer routines in virtio core
> > > > > > >
> > > > > > > v0->v1:
> > > > > > > - Fixed comments from Stefan to rename a cleanup function
> > > > > > > - Improved logic for handling any outstanding requests
> > > > > > > in bio layer
> > > > > > > - improved cancel callback to sync with ongoing done()
> > > > > > > ---
> > > > > > > drivers/block/virtio_blk.c | 95
> > > > > > > ++++++++++++++++++++++++++++++++++++++
> > > > > > > 1 file changed, 95 insertions(+)
> > > > > > >
> > > > > > > diff --git a/drivers/block/virtio_blk.c
> > > > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..c5e383c0ac48
> > > > > > > 100644
> > > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > > @@ -1554,6 +1554,98 @@ static int virtblk_probe(struct
> > > > > > > virtio_device
> > > > > > *vdev)
> > > > > > > return err;
> > > > > > > }
> > > > > > >
> > > > > > > +/*
> > > > > > > + * If the vq is broken, device will not complete requests.
> > > > > > > + * So we do it for the device.
> > > > > > > + */
> > > > > > > +static bool virtblk_complete_request_with_ioerr(struct
> > > > > > > +request *rq, void *data) {
> > > > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > > > > > > + struct virtio_blk *vblk = data;
> > > > > > > + struct virtio_blk_vq *vq;
> > > > > > > + unsigned long flags;
> > > > > > > +
> > > > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num];
> > > > > > > +
> > > > > > > + spin_lock_irqsave(&vq->lock, flags);
> > > > > > > +
> > > > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > > > > > > + if (blk_mq_request_started(rq) &&
> > !blk_mq_request_completed(rq))
> > > > > > > + blk_mq_complete_request(rq);
> > > > > > > +
> > > > > > > + spin_unlock_irqrestore(&vq->lock, flags);
> > > > > > > + return true;
> > > > > > > +}
> > > > > > > +
> > > > > > > +/*
> > > > > > > + * If the device is broken, it will not use any buffers and
> > > > > > > +waiting
> > > > > > > + * for that to happen is pointless. We'll do the cleanup in
> > > > > > > +the driver,
> > > > > > > + * completing all requests for the device.
> > > > > > > + */
> > > > > > > +static void virtblk_cleanup_broken_device(struct virtio_blk *vblk) {
> > > > > > > + struct request_queue *q = vblk->disk->queue;
> > > > > > > +
> > > > > > > + /*
> > > > > > > + * Start freezing the queue, so that new requests keep waiting at the
> > > > > > > + * door of bio_queue_enter(). We cannot fully freeze the
> > > > > > > +queue
> > > > > > because
> > > > > > > + * frozen queue is an empty queue and there are pending
> > requests, so
> > > > > > > + * only start freezing it.
> > > > > > > + */
> > > > > > > + blk_freeze_queue_start(q);
> > > > > > > +
> > > > > > > + /*
> > > > > > > + * When quiescing completes, all ongoing dispatches have
> > completed
> > > > > > > + * and no new dispatch will happen towards the driver.
> > > > > > > + */
> > > > > > > + blk_mq_quiesce_queue(q);
> > > > > > > +
> > > > > > > + /*
> > > > > > > + * Synchronize with any ongoing VQ callbacks that may have
> > started
> > > > > > > + * before the VQs were marked as broken. Any outstanding
> > requests
> > > > > > > + * will be completed by
> > virtblk_complete_request_with_ioerr().
> > > > > > > + */
> > > > > > > + virtio_synchronize_cbs(vblk->vdev);
> > > > > > > +
> > > > > > > + /*
> > > > > > > + * At this point, no new requests can enter the queue_rq()
> > and
> > > > > > > + * completion routine will not complete any new requests
> > > > > > > +either for
> > > > > > the
> > > > > > > + * broken vq. Hence, it is safe to cancel all requests which are
> > > > > > > + * started.
> > > > > > > + */
> > > > > > > + blk_mq_tagset_busy_iter(&vblk->tag_set,
> > > > > > > + virtblk_complete_request_with_ioerr,
> > vblk);
> > > > > > > + blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> > > > > > > +
> > > > > > > + /*
> > > > > > > + * All pending requests are cleaned up. Time to resume so
> > that disk
> > > > > > > + * deletion can be smooth. Start the HW queues so that when
> > > > > > > +queue
> > > > > > is
> > > > > > > + * unquiesced requests can again enter the driver.
> > > > > > > + */
> > > > > > > + blk_mq_start_stopped_hw_queues(q, true);
> > > > > > > +
> > > > > > > + /*
> > > > > > > + * Unquiescing will trigger dispatching any pending requests
> > to the
> > > > > > > + * driver which has crossed bio_queue_enter() to the driver.
> > > > > > > + */
> > > > > > > + blk_mq_unquiesce_queue(q);
> > > > > > > +
> > > > > > > + /*
> > > > > > > + * Wait for all pending dispatches to terminate which may
> > have been
> > > > > > > + * initiated after unquiescing.
> > > > > > > + */
> > > > > > > + blk_mq_freeze_queue_wait(q);
> > > > > > > +
> > > > > > > + /*
> > > > > > > + * Mark the disk dead so that once we unfreeze the queue,
> > requests
> > > > > > > + * waiting at the door of bio_queue_enter() can be aborted
> > > > > > > +right
> > > > > > away.
> > > > > > > + */
> > > > > > > + blk_mark_disk_dead(vblk->disk);
> > > > > > > +
> > > > > > > + /* Unfreeze the queue so that any waiting requests will be
> > aborted. */
> > > > > > > + blk_mq_unfreeze_queue_nomemrestore(q);
> > > > > > > +}
> > > > > > > +
> > > > > > > static void virtblk_remove(struct virtio_device *vdev) {
> > > > > > > struct virtio_blk *vblk = vdev->priv; @@ -1561,6 +1653,9 @@
> > > > > > > static void virtblk_remove(struct virtio_device *vdev)
> > > > > > > /* Make sure no work handler is accessing the device. */
> > > > > > > flush_work(&vblk->config_work);
> > > > > > >
> > > > > > > + if (virtqueue_is_broken(vblk->vqs[0].vq))
> > > > > > > + virtblk_cleanup_broken_device(vblk);
> > > > > > > +
> > > > > > > del_gendisk(vblk->disk);
> > > > > > > blk_mq_free_tag_set(&vblk->tag_set);
> > > > > > >
> > > > > > > --
> > > > > > > 2.34.1
> > > > > > >
* RE: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-25 11:04 ` Michael S. Tsirkin
@ 2025-06-25 11:08 ` Parav Pandit
2025-06-25 11:44 ` Michael S. Tsirkin
0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2025-06-25 11:08 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: 25 June 2025 04:34 PM
>
> On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
> >
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: 25 June 2025 01:24 AM
> > >
> > > On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
> > > >
> > > >
> > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > Sent: 25 June 2025 12:37 AM
> > > > >
> > > > > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> > > > > >
> > > > > >
> > > > > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > Sent: 25 June 2025 12:26 AM
> > > > > > >
> > > > > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > > > > > > > When the PCI device is surprise removed, requests may not be
> > > > > > > > completed by the device as the VQ is marked as broken. Due to
> > > > > > > > this, the disk deletion hangs.
> > > > > > >
> > > > > > > There are loops in the core virtio driver code that expect
> > > > > > > device register reads to eventually return 0:
> > > > > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_r
> > > > > > > eset
> > > > > > > ()
> > > > > > >
> > > > > > > Is there a hang if these loops are hit when a device has
> > > > > > > been surprise removed? I'm trying to understand whether
> > > > > > > surprise removal is fully supported or whether this patch is
> > > > > > > one step in that
> > > direction.
> > > > > > >
> > > > > > In one of the previous replies I answered Michael, but I don't
> > > > > > have the link handy.
> > > > > > It is not fully supported by this patch. It will hang.
> > > > > >
> > > > > > This patch restores the driver to the same state it was in before
> > > > > > the patch in the Fixes tag.
> > > > > > The virtio stack level work is needed to support surprise
> > > > > > removal, including the reset flow you rightly pointed out.
> > > > >
> > > > > Have plans to do that?
> > > > >
> > > > Didn't give it enough thought yet.
> > >
> > > This one is kind of pointless then? It just fixes the specific race
> > > window that your test harness happens to hit?
> > >
> > It was reported by Li from Baidu, whose tests failed.
> > I missed adding the "Reported-by" tag in v5; I had it until v4. :(
> >
> > > Maybe it's better to wait until someone does a comprehensive fix..
> > >
> > >
> > Oh, I was under the impression that you wanted to move forward from the
> > discussion of v4.
> > If you prefer comprehensive support across the virtio layers, I suggest
> > reverting the patch cited in the Fixes tag.
> >
> > Otherwise, the driver is in a degraded state, as virtio never supported surprise removal.
> > By reverting the cited patch (or with this fix), requests and disk deletion
> > will not hang.
>
> But they will hang in virtio core on reset, will they not? The tests just do not
> happen to trigger this?
>
It will hang on a true surprise removal, which no device has done so far because it never worked.
(Or one did, but it always hung and no one has reported it yet.)
I am familiar with two or more PCI devices that report surprise removal and do not complete the requests, yet still allow the device reset flow.
This is because the device is still present on the PCI bus; its removal was reported only via side-band signals.
But I agree that for full support, changes across all virtio layers would be needed as new functionality (without a Fixes tag :) ).
> > Please let me know if I should re-send a revert of the patch listed in the Fixes tag.
> >
> > > > > > > Apart from that, I'm happy with the virtio_blk.c aspects of the
> patch:
> > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > > >
> > > > > > > > Fix it by aborting the requests when the VQ is broken.
> > > > > > > >
> > > > > > > > With this fix now fio completes swiftly.
> > > > > > > > An alternative of an I/O timeout has been considered; however,
> > > > > > > > when the driver knows the block device is unresponsive, swiftly
> > > > > > > > clearing the requests enables users and upper layers to react
> > > > > > > > quickly.
> > > > > > > >
> > > > > > > > Verified with multiple device unplug iterations with
> > > > > > > > pending requests in virtio used ring and some pending with the
> device.
> > > > > > > >
> > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal
> > > > > > > > of virtio pci device")
> > > > > > > > Cc: stable@vger.kernel.org
> > > > > > > > Reported-by: Li RongQing <lirongqing@baidu.com>
> > > > > > > > Closes:
> > > > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c5
> > > > > > > > 5fb7
> > > > > > > > 3ca9
> > > > > > > > b474
> > > > > > > > 1@baidu.com/
> > > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> > > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com>
> > > > > > > >
> > > > > > > > ---
> > > > > > > > v4->v5:
> > > > > > > > - fixed comment style where comment to start with one
> > > > > > > > empty line at start
> > > > > > > > - Addressed comments from Alok
> > > > > > > > - fixed typo in broken vq check
> > > > > > > > v3->v4:
> > > > > > > > - Addressed comments from Michael
> > > > > > > > - renamed virtblk_request_cancel() to
> > > > > > > > virtblk_complete_request_with_ioerr()
> > > > > > > > - Added comments for virtblk_complete_request_with_ioerr()
> > > > > > > > - Renamed virtblk_broken_device_cleanup() to
> > > > > > > > virtblk_cleanup_broken_device()
> > > > > > > > - Added comments for virtblk_cleanup_broken_device()
> > > > > > > > - Moved the broken vq check in virtblk_remove()
> > > > > > > > - Fixed comment style to have first empty line
> > > > > > > > - replaced freezed to frozen
> > > > > > > > - Fixed comments rephrased
> > > > > > > >
> > > > > > > > v2->v3:
> > > > > > > > - Addressed comments from Michael
> > > > > > > > - updated comment for synchronizing with callbacks
> > > > > > > >
> > > > > > > > v1->v2:
> > > > > > > > - Addressed comments from Stefan
> > > > > > > > - fixed spelling to 'waiting'
> > > > > > > > - Addressed comments from Michael
> > > > > > > > - Dropped checking broken vq from queue_rq() and queue_rqs()
> > > > > > > > because it is checked in lower layer routines in virtio
> > > > > > > > core
> > > > > > > >
> > > > > > > > v0->v1:
> > > > > > > > - Fixed comments from Stefan to rename a cleanup function
> > > > > > > > - Improved logic for handling any outstanding requests
> > > > > > > > in bio layer
> > > > > > > > - improved cancel callback to sync with ongoing done()
> > > > > > > > ---
> > > > > > > > drivers/block/virtio_blk.c | 95
> > > > > > > > ++++++++++++++++++++++++++++++++++++++
> > > > > > > > 1 file changed, 95 insertions(+)
> > > > > > > >
> > > > > > > > diff --git a/drivers/block/virtio_blk.c
> > > > > > > > b/drivers/block/virtio_blk.c index
> > > > > > > > 7cffea01d868..c5e383c0ac48
> > > > > > > > 100644
> > > > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > > > @@ -1554,6 +1554,98 @@ static int virtblk_probe(struct
> > > > > > > > virtio_device
> > > > > > > *vdev)
> > > > > > > > return err;
> > > > > > > > }
> > > > > > > >
> > > > > > > > +/*
> > > > > > > > + * If the vq is broken, device will not complete requests.
> > > > > > > > + * So we do it for the device.
> > > > > > > > + */
> > > > > > > > +static bool virtblk_complete_request_with_ioerr(struct
> > > > > > > > +request *rq, void *data) {
> > > > > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > > > > > > > + struct virtio_blk *vblk = data;
> > > > > > > > + struct virtio_blk_vq *vq;
> > > > > > > > + unsigned long flags;
> > > > > > > > +
> > > > > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num];
> > > > > > > > +
> > > > > > > > + spin_lock_irqsave(&vq->lock, flags);
> > > > > > > > +
> > > > > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > > > > > > > + if (blk_mq_request_started(rq) &&
> > > !blk_mq_request_completed(rq))
> > > > > > > > + blk_mq_complete_request(rq);
> > > > > > > > +
> > > > > > > > + spin_unlock_irqrestore(&vq->lock, flags);
> > > > > > > > + return true;
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > +/*
> > > > > > > > + * If the device is broken, it will not use any buffers
> > > > > > > > +and waiting
> > > > > > > > + * for that to happen is pointless. We'll do the cleanup
> > > > > > > > +in the driver,
> > > > > > > > + * completing all requests for the device.
> > > > > > > > + */
> > > > > > > > +static void virtblk_cleanup_broken_device(struct virtio_blk *vblk)
> {
> > > > > > > > + struct request_queue *q = vblk->disk->queue;
> > > > > > > > +
> > > > > > > > + /*
> > > > > > > > + * Start freezing the queue, so that new requests keep waiting at the
> > > > > > > > + * door of bio_queue_enter(). We cannot fully freeze the
> > > > > > > > +queue
> > > > > > > because
> > > > > > > > + * frozen queue is an empty queue and there are pending
> > > requests, so
> > > > > > > > + * only start freezing it.
> > > > > > > > + */
> > > > > > > > + blk_freeze_queue_start(q);
> > > > > > > > +
> > > > > > > > + /*
> > > > > > > > + * When quiescing completes, all ongoing dispatches have
> > > completed
> > > > > > > > + * and no new dispatch will happen towards the driver.
> > > > > > > > + */
> > > > > > > > + blk_mq_quiesce_queue(q);
> > > > > > > > +
> > > > > > > > + /*
> > > > > > > > + * Synchronize with any ongoing VQ callbacks that may
> > > > > > > > +have
> > > started
> > > > > > > > + * before the VQs were marked as broken. Any outstanding
> > > requests
> > > > > > > > + * will be completed by
> > > virtblk_complete_request_with_ioerr().
> > > > > > > > + */
> > > > > > > > + virtio_synchronize_cbs(vblk->vdev);
> > > > > > > > +
> > > > > > > > + /*
> > > > > > > > + * At this point, no new requests can enter the
> > > > > > > > +queue_rq()
> > > and
> > > > > > > > + * completion routine will not complete any new requests
> > > > > > > > +either for
> > > > > > > the
> > > > > > > > + * broken vq. Hence, it is safe to cancel all requests which are
> > > > > > > > + * started.
> > > > > > > > + */
> > > > > > > > + blk_mq_tagset_busy_iter(&vblk->tag_set,
> > > > > > > > + virtblk_complete_request_with_ioerr,
> > > vblk);
> > > > > > > > + blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> > > > > > > > +
> > > > > > > > + /*
> > > > > > > > + * All pending requests are cleaned up. Time to resume
> > > > > > > > +so
> > > that disk
> > > > > > > > + * deletion can be smooth. Start the HW queues so that
> > > > > > > > +when queue
> > > > > > > is
> > > > > > > > + * unquiesced requests can again enter the driver.
> > > > > > > > + */
> > > > > > > > + blk_mq_start_stopped_hw_queues(q, true);
> > > > > > > > +
> > > > > > > > + /*
> > > > > > > > + * Unquiescing will trigger dispatching any pending
> > > > > > > > +requests
> > > to the
> > > > > > > > + * driver which has crossed bio_queue_enter() to the driver.
> > > > > > > > + */
> > > > > > > > + blk_mq_unquiesce_queue(q);
> > > > > > > > +
> > > > > > > > + /*
> > > > > > > > + * Wait for all pending dispatches to terminate which
> > > > > > > > +may
> > > have been
> > > > > > > > + * initiated after unquiescing.
> > > > > > > > + */
> > > > > > > > + blk_mq_freeze_queue_wait(q);
> > > > > > > > +
> > > > > > > > + /*
> > > > > > > > + * Mark the disk dead so that once we unfreeze the
> > > > > > > > +queue,
> > > requests
> > > > > > > > + * waiting at the door of bio_queue_enter() can be
> > > > > > > > +aborted right
> > > > > > > away.
> > > > > > > > + */
> > > > > > > > + blk_mark_disk_dead(vblk->disk);
> > > > > > > > +
> > > > > > > > + /* Unfreeze the queue so that any waiting requests will
> > > > > > > > +be
> > > aborted. */
> > > > > > > > + blk_mq_unfreeze_queue_nomemrestore(q);
> > > > > > > > +}
> > > > > > > > +
> > > > > > > > static void virtblk_remove(struct virtio_device *vdev) {
> > > > > > > > struct virtio_blk *vblk = vdev->priv; @@ -1561,6 +1653,9
> > > > > > > > @@ static void virtblk_remove(struct virtio_device *vdev)
> > > > > > > > /* Make sure no work handler is accessing the device. */
> > > > > > > > flush_work(&vblk->config_work);
> > > > > > > >
> > > > > > > > + if (virtqueue_is_broken(vblk->vqs[0].vq))
> > > > > > > > + virtblk_cleanup_broken_device(vblk);
> > > > > > > > +
> > > > > > > > del_gendisk(vblk->disk);
> > > > > > > > blk_mq_free_tag_set(&vblk->tag_set);
> > > > > > > >
> > > > > > > > --
> > > > > > > > 2.34.1
> > > > > > > >
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-25 11:08 ` Parav Pandit
@ 2025-06-25 11:44 ` Michael S. Tsirkin
2025-06-25 19:08 ` Parav Pandit
0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2025-06-25 11:44 UTC (permalink / raw)
To: Parav Pandit
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
On Wed, Jun 25, 2025 at 11:08:42AM +0000, Parav Pandit wrote:
>
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: 25 June 2025 04:34 PM
> >
> > On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
> > >
> > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > Sent: 25 June 2025 01:24 AM
> > > >
> > > > On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
> > > > >
> > > > >
> > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > Sent: 25 June 2025 12:37 AM
> > > > > >
> > > > > > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> > > > > > >
> > > > > > >
> > > > > > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > Sent: 25 June 2025 12:26 AM
> > > > > > > >
> > > > > > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > > > > > > > > When the PCI device is surprise removed, requests may not
> > > > > > > > > complete the device as the VQ is marked as broken. Due to
> > > > > > > > > this, the disk deletion hangs.
> > > > > > > >
> > > > > > > > There are loops in the core virtio driver code that expect
> > > > > > > > device register reads to eventually return 0:
> > > > > > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > > > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> > > > > > > >
> > > > > > > > Is there a hang if these loops are hit when a device has
> > > > > > > > been surprise removed? I'm trying to understand whether
> > > > > > > > surprise removal is fully supported or whether this patch is
> > > > > > > > one step in that
> > > > direction.
> > > > > > > >
> > > > > > > In one of the previous replies I answered to Michael, but
> > > > > > > don't have the link
> > > > > > handy.
> > > > > > > It is not fully supported by this patch. It will hang.
> > > > > > >
> > > > > > > This patch restores driver back to the same state what it was
> > > > > > > before the fixes
> > > > > > tag patch.
> > > > > > > The virtio stack level work is needed to support surprise
> > > > > > > removal, including
> > > > > > the reset flow you rightly pointed.
> > > > > >
> > > > > > Have plans to do that?
> > > > > >
> > > > > Didn't give enough thoughts on it yet.
> > > >
> > > > This one is kind of pointless then? It just fixes the specific race
> > > > window that your test harness happens to hit?
> > > >
> > > It was reported by Li from Baidu, whose tests failed.
> > > I missed to tag "reported-by" in v5. I had it until v4. :(
> > >
> > > > Maybe it's better to wait until someone does a comprehensive fix..
> > > >
> > > >
> > > Oh, I was under the impression that you wanted to step forward in the
> > > discussion of v4.
> > > If you prefer a comprehensive support across layers of virtio, I suggest you
> > should revert the cited patch in fixes tag.
> > >
> > > Otherwise, it is in degraded state as virtio never supported surprise removal.
> > > By reverting the cited patch (or with this fix), the requests and disk deletion
> > will not hang.
> >
> > But they will hang in virtio core on reset, will they not? The tests just do
> > not happen to trigger this?
> >
> It will hang if it is a true surprise removal, which no device has done so far
> because it never worked (or did, but always hung and no one reported it yet).
>
> I am familiar with two or more PCI devices that report surprise removal and do
> not complete the requests, yet still allow the device reset flow. This is
> because the device is still present on the PCI bus; device removal was reported
> only via side-band signals.
So why do we care about it so much? I think it's great this patch
exists, for example it makes it easier to test surprise removal
and find more bugs. But is it better to just have it hang
unconditionally? Are we now making a commitment that it's working -
one we don't seem to intend to implement?
> But I agree that for full support, virtio all layer changes would be needed as new functionality (without fixes tag :) ).
Or with a fixes tag - lots of people just use it as a signal to mean
"where can this be reasonably backported to".
> > > Please let me know if I should re-send to revert the patch listed in fixes tag.
> > >
> > > > > > > > Apart from that, I'm happy with the virtio_blk.c aspects of the
> > patch:
> > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > > >
> > > > > > > > > Fix it by aborting the requests when the VQ is broken.
> > > > > > > > >
> > > > > > > > > With this fix now fio completes swiftly.
> > > > > > > > > An alternative of IO timeout has been considered, however
> > > > > > > > > when the driver knows about unresponsive block device,
> > > > > > > > > swiftly clearing them enables users and upper layers to react
> > quickly.
> > > > > > > > >
> > > > > > > > > Verified with multiple device unplug iterations with
> > > > > > > > > pending requests in virtio used ring and some pending with the
> > device.
> > > > > > > > >
> > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal
> > > > > > > > > of virtio pci device")
> > > > > > > > > Cc: stable@vger.kernel.org
> > > > > > > > > Reported-by: Li RongQing <lirongqing@baidu.com>
> > > > > > > > > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
> > > > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > > > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> > > > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com>
> > > > > > > > >
> > > > > > > > > ---
> > > > > > > > > v4->v5:
> > > > > > > > > - fixed comment style where comment to start with one
> > > > > > > > > empty line at start
> > > > > > > > > - Addressed comments from Alok
> > > > > > > > > - fixed typo in broken vq check
> > > > > > > > > v3->v4:
> > > > > > > > > - Addressed comments from Michael
> > > > > > > > > - renamed virtblk_request_cancel() to
> > > > > > > > > virtblk_complete_request_with_ioerr()
> > > > > > > > > - Added comments for virtblk_complete_request_with_ioerr()
> > > > > > > > > - Renamed virtblk_broken_device_cleanup() to
> > > > > > > > > virtblk_cleanup_broken_device()
> > > > > > > > > - Added comments for virtblk_cleanup_broken_device()
> > > > > > > > > - Moved the broken vq check in virtblk_remove()
> > > > > > > > > - Fixed comment style to have first empty line
> > > > > > > > > - replaced freezed to frozen
> > > > > > > > > - Fixed comments rephrased
> > > > > > > > >
> > > > > > > > > v2->v3:
> > > > > > > > > - Addressed comments from Michael
> > > > > > > > > - updated comment for synchronizing with callbacks
> > > > > > > > >
> > > > > > > > > v1->v2:
> > > > > > > > > - Addressed comments from Stephan
> > > > > > > > > - fixed spelling to 'waiting'
> > > > > > > > > - Addressed comments from Michael
> > > > > > > > > - Dropped checking broken vq from queue_rq() and queue_rqs()
> > > > > > > > > because it is checked in lower layer routines in virtio
> > > > > > > > > core
> > > > > > > > >
> > > > > > > > > v0->v1:
> > > > > > > > > - Fixed comments from Stefan to rename a cleanup function
> > > > > > > > > - Improved logic for handling any outstanding requests
> > > > > > > > > in bio layer
> > > > > > > > > - improved cancel callback to sync with ongoing done()
> > > > > > > > > ---
> > > > > > > > > > drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
> > > > > > > > > > 1 file changed, 95 insertions(+)
> > > > > > > > > > 
> > > > > > > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > > > > > > > > index 7cffea01d868..c5e383c0ac48 100644
> > > > > > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > > > > > @@ -1554,6 +1554,98 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > > > > > >  	return err;
> > > > > > > > > >  }
> > > > > > > > > > 
> > > > > > > > > > +/*
> > > > > > > > > > + * If the vq is broken, device will not complete requests.
> > > > > > > > > > + * So we do it for the device.
> > > > > > > > > > + */
> > > > > > > > > > +static bool virtblk_complete_request_with_ioerr(struct request *rq, void *data)
> > > > > > > > > > +{
> > > > > > > > > > +	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > > > > > > > > > +	struct virtio_blk *vblk = data;
> > > > > > > > > > +	struct virtio_blk_vq *vq;
> > > > > > > > > > +	unsigned long flags;
> > > > > > > > > > +
> > > > > > > > > > +	vq = &vblk->vqs[rq->mq_hctx->queue_num];
> > > > > > > > > > +
> > > > > > > > > > +	spin_lock_irqsave(&vq->lock, flags);
> > > > > > > > > > +
> > > > > > > > > > +	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > > > > > > > > > +	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
> > > > > > > > > > +		blk_mq_complete_request(rq);
> > > > > > > > > > +
> > > > > > > > > > +	spin_unlock_irqrestore(&vq->lock, flags);
> > > > > > > > > > +	return true;
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > > +/*
> > > > > > > > > > + * If the device is broken, it will not use any buffers and waiting
> > > > > > > > > > + * for that to happen is pointless. We'll do the cleanup in the driver,
> > > > > > > > > > + * completing all requests for the device.
> > > > > > > > > > + */
> > > > > > > > > > +static void virtblk_cleanup_broken_device(struct virtio_blk *vblk)
> > > > > > > > > > +{
> > > > > > > > > > +	struct request_queue *q = vblk->disk->queue;
> > > > > > > > > > +
> > > > > > > > > > +	/*
> > > > > > > > > > +	 * Start freezing the queue, so that new requests keeps waiting at the
> > > > > > > > > > +	 * door of bio_queue_enter(). We cannot fully freeze the queue because
> > > > > > > > > > +	 * frozen queue is an empty queue and there are pending requests, so
> > > > > > > > > > +	 * only start freezing it.
> > > > > > > > > > +	 */
> > > > > > > > > > +	blk_freeze_queue_start(q);
> > > > > > > > > > +
> > > > > > > > > > +	/*
> > > > > > > > > > +	 * When quiescing completes, all ongoing dispatches have completed
> > > > > > > > > > +	 * and no new dispatch will happen towards the driver.
> > > > > > > > > > +	 */
> > > > > > > > > > +	blk_mq_quiesce_queue(q);
> > > > > > > > > > +
> > > > > > > > > > +	/*
> > > > > > > > > > +	 * Synchronize with any ongoing VQ callbacks that may have started
> > > > > > > > > > +	 * before the VQs were marked as broken. Any outstanding requests
> > > > > > > > > > +	 * will be completed by virtblk_complete_request_with_ioerr().
> > > > > > > > > > +	 */
> > > > > > > > > > +	virtio_synchronize_cbs(vblk->vdev);
> > > > > > > > > > +
> > > > > > > > > > +	/*
> > > > > > > > > > +	 * At this point, no new requests can enter the queue_rq() and
> > > > > > > > > > +	 * completion routine will not complete any new requests either for the
> > > > > > > > > > +	 * broken vq. Hence, it is safe to cancel all requests which are
> > > > > > > > > > +	 * started.
> > > > > > > > > > +	 */
> > > > > > > > > > +	blk_mq_tagset_busy_iter(&vblk->tag_set,
> > > > > > > > > > +				virtblk_complete_request_with_ioerr, vblk);
> > > > > > > > > > +	blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> > > > > > > > > > +
> > > > > > > > > > +	/*
> > > > > > > > > > +	 * All pending requests are cleaned up. Time to resume so that disk
> > > > > > > > > > +	 * deletion can be smooth. Start the HW queues so that when queue is
> > > > > > > > > > +	 * unquiesced requests can again enter the driver.
> > > > > > > > > > +	 */
> > > > > > > > > > +	blk_mq_start_stopped_hw_queues(q, true);
> > > > > > > > > > +
> > > > > > > > > > +	/*
> > > > > > > > > > +	 * Unquiescing will trigger dispatching any pending requests to the
> > > > > > > > > > +	 * driver which has crossed bio_queue_enter() to the driver.
> > > > > > > > > > +	 */
> > > > > > > > > > +	blk_mq_unquiesce_queue(q);
> > > > > > > > > > +
> > > > > > > > > > +	/*
> > > > > > > > > > +	 * Wait for all pending dispatches to terminate which may have been
> > > > > > > > > > +	 * initiated after unquiescing.
> > > > > > > > > > +	 */
> > > > > > > > > > +	blk_mq_freeze_queue_wait(q);
> > > > > > > > > > +
> > > > > > > > > > +	/*
> > > > > > > > > > +	 * Mark the disk dead so that once we unfreeze the queue, requests
> > > > > > > > > > +	 * waiting at the door of bio_queue_enter() can be aborted right away.
> > > > > > > > > > +	 */
> > > > > > > > > > +	blk_mark_disk_dead(vblk->disk);
> > > > > > > > > > +
> > > > > > > > > > +	/* Unfreeze the queue so that any waiting requests will be aborted. */
> > > > > > > > > > +	blk_mq_unfreeze_queue_nomemrestore(q);
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > >  static void virtblk_remove(struct virtio_device *vdev)
> > > > > > > > > >  {
> > > > > > > > > >  	struct virtio_blk *vblk = vdev->priv;
> > > > > > > > > > @@ -1561,6 +1653,9 @@ static void virtblk_remove(struct virtio_device *vdev)
> > > > > > > > > >  	/* Make sure no work handler is accessing the device. */
> > > > > > > > > >  	flush_work(&vblk->config_work);
> > > > > > > > > > 
> > > > > > > > > > +	if (virtqueue_is_broken(vblk->vqs[0].vq))
> > > > > > > > > > +		virtblk_cleanup_broken_device(vblk);
> > > > > > > > > > +
> > > > > > > > > >  	del_gendisk(vblk->disk);
> > > > > > > > > >  	blk_mq_free_tag_set(&vblk->tag_set);
> > > > > > > > > > 
> > > > > > > > > --
> > > > > > > > > 2.34.1
> > > > > > > > >
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-24 18:56 ` Stefan Hajnoczi
2025-06-24 19:01 ` Parav Pandit
2025-06-24 19:06 ` Michael S. Tsirkin
@ 2025-06-25 12:39 ` Michael S. Tsirkin
2025-06-25 19:01 ` Parav Pandit
2 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2025-06-25 12:39 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Parav Pandit, axboe, virtualization, linux-block, stable,
lirongqing, kch, xuanzhuo, pbonzini, jasowang, alok.a.tiwari,
Max Gurtovoy, Israel Rukshin
On Tue, Jun 24, 2025 at 02:56:22PM -0400, Stefan Hajnoczi wrote:
> On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > When the PCI device is surprise removed, requests may not complete
> > the device as the VQ is marked as broken. Due to this, the disk
> > deletion hangs.
>
> There are loops in the core virtio driver code that expect device
> register reads to eventually return 0:
> drivers/virtio/virtio_pci_modern.c:vp_reset()
> drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
>
> Is there a hang if these loops are hit when a device has been surprise
> removed? I'm trying to understand whether surprise removal is fully
> supported or whether this patch is one step in that direction.
>
> Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Is this as simple as this?
-->
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
---
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index 7182f43ed055..df983fa9046a 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -555,8 +555,12 @@ static void vp_reset(struct virtio_device *vdev)
* This will flush out the status write, and flush in device writes,
* including MSI-X interrupts, if any.
*/
- while (vp_modern_get_status(mdev))
+ while (vp_modern_get_status(mdev)) {
+ /* If device is removed meanwhile, it will never respond. */
+ if (!pci_device_is_present(vp_dev->pci_dev))
+ break;
msleep(1);
+ }
vp_modern_avq_cleanup(vdev);
diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c
index 0d3dbfaf4b23..7177ce0d63be 100644
--- a/drivers/virtio/virtio_pci_modern_dev.c
+++ b/drivers/virtio/virtio_pci_modern_dev.c
@@ -523,11 +523,19 @@ void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index)
vp_iowrite16(index, &cfg->cfg.queue_select);
vp_iowrite16(1, &cfg->queue_reset);
- while (vp_ioread16(&cfg->queue_reset))
+ while (vp_ioread16(&cfg->queue_reset)) {
+ /* If device is removed meanwhile, it will never respond. */
+ if (!pci_device_is_present(vp_dev->pci_dev))
+ break;
msleep(1);
+ }
- while (vp_ioread16(&cfg->cfg.queue_enable))
+ while (vp_ioread16(&cfg->cfg.queue_enable)) {
+ /* If device is removed meanwhile, it will never respond. */
+ if (!pci_device_is_present(vp_dev->pci_dev))
+ break;
msleep(1);
+ }
}
EXPORT_SYMBOL_GPL(vp_modern_set_queue_reset);
^ permalink raw reply related [flat|nested] 24+ messages in thread
* RE: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-25 12:39 ` Michael S. Tsirkin
@ 2025-06-25 19:01 ` Parav Pandit
0 siblings, 0 replies; 24+ messages in thread
From: Parav Pandit @ 2025-06-25 19:01 UTC (permalink / raw)
To: Michael S. Tsirkin, Stefan Hajnoczi
Cc: axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: 25 June 2025 06:09 PM
>
> On Tue, Jun 24, 2025 at 02:56:22PM -0400, Stefan Hajnoczi wrote:
> > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit wrote:
> > > When the PCI device is surprise removed, requests may not complete
> > > the device as the VQ is marked as broken. Due to this, the disk
> > > deletion hangs.
> >
> > There are loops in the core virtio driver code that expect device
> > register reads to eventually return 0:
> > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> >
> > Is there a hang if these loops are hit when a device has been surprise
> > removed? I'm trying to understand whether surprise removal is fully
> > supported or whether this patch is one step in that direction.
> >
> > Apart from that, I'm happy with the virtio_blk.c aspects of the patch:
> > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
>
> Is this as simple as this?
>
I didn't audit the code for where else the changes are needed.
I had a similar change a while ago, but I also recall hitting an assert somewhere in the PCI layer during the vp_reset() flow.
I don't have lab access today; I will check back tomorrow and update.
> -->
>
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>
> ---
>
> diff --git a/drivers/virtio/virtio_pci_modern.c
> b/drivers/virtio/virtio_pci_modern.c
> index 7182f43ed055..df983fa9046a 100644
> --- a/drivers/virtio/virtio_pci_modern.c
> +++ b/drivers/virtio/virtio_pci_modern.c
> @@ -555,8 +555,12 @@ static void vp_reset(struct virtio_device *vdev)
> * This will flush out the status write, and flush in device writes,
> * including MSI-X interrupts, if any.
> */
> - while (vp_modern_get_status(mdev))
> + while (vp_modern_get_status(mdev)) {
> + /* If device is removed meanwhile, it will never respond. */
> + if (!pci_device_is_present(vp_dev->pci_dev))
> + break;
> msleep(1);
> + }
>
> vp_modern_avq_cleanup(vdev);
>
> diff --git a/drivers/virtio/virtio_pci_modern_dev.c
> b/drivers/virtio/virtio_pci_modern_dev.c
> index 0d3dbfaf4b23..7177ce0d63be 100644
> --- a/drivers/virtio/virtio_pci_modern_dev.c
> +++ b/drivers/virtio/virtio_pci_modern_dev.c
> @@ -523,11 +523,19 @@ void vp_modern_set_queue_reset(struct
> virtio_pci_modern_device *mdev, u16 index)
> vp_iowrite16(index, &cfg->cfg.queue_select);
> vp_iowrite16(1, &cfg->queue_reset);
>
> - while (vp_ioread16(&cfg->queue_reset))
> + while (vp_ioread16(&cfg->queue_reset)) {
> + /* If device is removed meanwhile, it will never respond. */
> + if (!pci_device_is_present(vp_dev->pci_dev))
> + break;
> msleep(1);
> + }
>
> - while (vp_ioread16(&cfg->cfg.queue_enable))
> + while (vp_ioread16(&cfg->cfg.queue_enable)) {
> + /* If device is removed meanwhile, it will never respond. */
> + if (!pci_device_is_present(vp_dev->pci_dev))
> + break;
> msleep(1);
> + }
> }
> EXPORT_SYMBOL_GPL(vp_modern_set_queue_reset);
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-25 11:44 ` Michael S. Tsirkin
@ 2025-06-25 19:08 ` Parav Pandit
2025-06-25 19:22 ` Michael S. Tsirkin
0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2025-06-25 19:08 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: 25 June 2025 05:15 PM
>
> On Wed, Jun 25, 2025 at 11:08:42AM +0000, Parav Pandit wrote:
> >
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: 25 June 2025 04:34 PM
> > >
> > > On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
> > > >
> > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > Sent: 25 June 2025 01:24 AM
> > > > >
> > > > > On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
> > > > > >
> > > > > >
> > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > Sent: 25 June 2025 12:37 AM
> > > > > > >
> > > > > > > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > > Sent: 25 June 2025 12:26 AM
> > > > > > > > >
> > > > > > > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit
> wrote:
> > > > > > > > > > When the PCI device is surprise removed, requests may
> > > > > > > > > > not complete the device as the VQ is marked as broken.
> > > > > > > > > > Due to this, the disk deletion hangs.
> > > > > > > > >
> > > > > > > > > There are loops in the core virtio driver code that
> > > > > > > > > expect device register reads to eventually return 0:
> > > > > > > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > > > > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> > > > > > > > >
> > > > > > > > > Is there a hang if these loops are hit when a device has
> > > > > > > > > been surprise removed? I'm trying to understand whether
> > > > > > > > > surprise removal is fully supported or whether this
> > > > > > > > > patch is one step in that
> > > > > direction.
> > > > > > > > >
> > > > > > > > In one of the previous replies I answered to Michael, but
> > > > > > > > don't have the link
> > > > > > > handy.
> > > > > > > > It is not fully supported by this patch. It will hang.
> > > > > > > >
> > > > > > > > This patch restores driver back to the same state what it
> > > > > > > > was before the fixes
> > > > > > > tag patch.
> > > > > > > > The virtio stack level work is needed to support surprise
> > > > > > > > removal, including
> > > > > > > the reset flow you rightly pointed.
> > > > > > >
> > > > > > > Have plans to do that?
> > > > > > >
> > > > > > Didn't give enough thoughts on it yet.
> > > > >
> > > > > This one is kind of pointless then? It just fixes the specific
> > > > > race window that your test harness happens to hit?
> > > > >
> > > > It was reported by Li from Baidu, whose tests failed.
> > > > I missed to tag "reported-by" in v5. I had it until v4. :(
> > > >
> > > > > Maybe it's better to wait until someone does a comprehensive fix..
> > > > >
> > > > >
> > > > Oh, I was under the impression that you wanted to step forward in the
> > > > discussion of v4.
> > > > If you prefer a comprehensive support across layers of virtio, I
> > > > suggest you
> > > should revert the cited patch in fixes tag.
> > > >
> > > > Otherwise, it is in degraded state as virtio never supported surprise
> removal.
> > > > By reverting the cited patch (or with this fix), the requests and
> > > > disk deletion
> > > will not hang.
> > >
> > > But they will hang in virtio core on reset, will they not? The tests
> > > just do not happen to trigger this?
> > >
> > It will hang if it is a true surprise removal, which no device has done so
> > far because it never worked (or did, but always hung and no one reported it
> > yet).
> >
> > I am familiar with two or more PCI devices that report surprise removal and
> > do not complete the requests, yet still allow the device reset flow. This is
> > because the device is still present on the PCI bus; device removal was
> > reported only via side-band signals.
>
> So why do we care about it so much? I think it's great this patch exists, for
> example it makes it easier to test surprise removal and find more bugs. But is
> it better to just have it hang unconditionally? Are we now making a
> commitment that it's working - one we don't seem to intend to implement?
>
The patch improves the situation from its current state.
But as you posted, more changes in the PCI layer are needed.
I didn't audit where else they may be needed.
vp_reset() may need to return a status indicating whether the reset succeeded or failed.
Otherwise, during probe(), vp_reset() aborts the reset and attempts to load the driver for a removed device.
I guess the suspend() callback, which waits indefinitely while freezing the queue, also needs adaptation.
> > But I agree that for full support, virtio all layer changes would be needed as
> new functionality (without fixes tag :) ).
>
>
> Or with a fixes tag - lots of people just use it as a signal to mean "where can
> this be reasonably backported to".
>
Yes, I think the fix for the older kernels is needed, hence I cced stable too.
>
> > > > Please let me know if I should re-send to revert the patch listed in fixes
> tag.
> > > >
> > > > > > > > > Apart from that, I'm happy with the virtio_blk.c aspects
> > > > > > > > > of the
> > > patch:
> > > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Fix it by aborting the requests when the VQ is broken.
> > > > > > > > > >
> > > > > > > > > > With this fix now fio completes swiftly.
> > > > > > > > > > An alternative of IO timeout has been considered,
> > > > > > > > > > however when the driver knows about unresponsive block
> > > > > > > > > > device, swiftly clearing them enables users and upper
> > > > > > > > > > layers to react
> > > quickly.
> > > > > > > > > >
> > > > > > > > > > Verified with multiple device unplug iterations with
> > > > > > > > > > pending requests in virtio used ring and some pending
> > > > > > > > > > with the
> > > device.
> > > > > > > > > >
> > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise
> > > > > > > > > > removal of virtio pci device")
> > > > > > > > > > Cc: stable@vger.kernel.org
> > > > > > > > > > Reported-by: Li RongQing <lirongqing@baidu.com>
> > > > > > > > > > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
> > > > > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > > > > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> > > > > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com>
> > > > > > > > > >
> > > > > > > > > > ---
> > > > > > > > > > v4->v5:
> > > > > > > > > > - fixed comment style where comment to start with one
> > > > > > > > > > empty line at start
> > > > > > > > > > - Addressed comments from Alok
> > > > > > > > > > - fixed typo in broken vq check
> > > > > > > > > > v3->v4:
> > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > - renamed virtblk_request_cancel() to
> > > > > > > > > > virtblk_complete_request_with_ioerr()
> > > > > > > > > > - Added comments for
> > > > > > > > > > virtblk_complete_request_with_ioerr()
> > > > > > > > > > - Renamed virtblk_broken_device_cleanup() to
> > > > > > > > > > virtblk_cleanup_broken_device()
> > > > > > > > > > - Added comments for virtblk_cleanup_broken_device()
> > > > > > > > > > - Moved the broken vq check in virtblk_remove()
> > > > > > > > > > - Fixed comment style to have first empty line
> > > > > > > > > > - replaced freezed to frozen
> > > > > > > > > > - Fixed comments rephrased
> > > > > > > > > >
> > > > > > > > > > v2->v3:
> > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > - updated comment for synchronizing with callbacks
> > > > > > > > > >
> > > > > > > > > > v1->v2:
> > > > > > > > > > - Addressed comments from Stefan
> > > > > > > > > > - fixed spelling to 'waiting'
> > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > - Dropped checking broken vq from queue_rq() and
> queue_rqs()
> > > > > > > > > > because it is checked in lower layer routines in
> > > > > > > > > > virtio core
> > > > > > > > > >
> > > > > > > > > > v0->v1:
> > > > > > > > > > - Fixed comments from Stefan to rename a cleanup
> > > > > > > > > > function
> > > > > > > > > > - Improved logic for handling any outstanding requests
> > > > > > > > > > in bio layer
> > > > > > > > > > - improved cancel callback to sync with ongoing done()
> > > > > > > > > > ---
> > > > > > > > > > drivers/block/virtio_blk.c | 95
> > > > > > > > > > ++++++++++++++++++++++++++++++++++++++
> > > > > > > > > > 1 file changed, 95 insertions(+)
> > > > > > > > > >
> > > > > > > > > > diff --git a/drivers/block/virtio_blk.c
> > > > > > > > > > b/drivers/block/virtio_blk.c index
> > > > > > > > > > 7cffea01d868..c5e383c0ac48
> > > > > > > > > > 100644
> > > > > > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > > > > > @@ -1554,6 +1554,98 @@ static int virtblk_probe(struct
> > > > > > > > > > virtio_device
> > > > > > > > > *vdev)
> > > > > > > > > > return err;
> > > > > > > > > > }
> > > > > > > > > >
> > > > > > > > > > +/*
> > > > > > > > > > + * If the vq is broken, device will not complete requests.
> > > > > > > > > > + * So we do it for the device.
> > > > > > > > > > + */
> > > > > > > > > > +static bool
> > > > > > > > > > +virtblk_complete_request_with_ioerr(struct
> > > > > > > > > > +request *rq, void *data) {
> > > > > > > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > > > > > > > > > + struct virtio_blk *vblk = data;
> > > > > > > > > > + struct virtio_blk_vq *vq;
> > > > > > > > > > + unsigned long flags;
> > > > > > > > > > +
> > > > > > > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num];
> > > > > > > > > > +
> > > > > > > > > > + spin_lock_irqsave(&vq->lock, flags);
> > > > > > > > > > +
> > > > > > > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > > > > > > > > > + if (blk_mq_request_started(rq) &&
> > > > > !blk_mq_request_completed(rq))
> > > > > > > > > > + blk_mq_complete_request(rq);
> > > > > > > > > > +
> > > > > > > > > > + spin_unlock_irqrestore(&vq->lock, flags);
> > > > > > > > > > + return true;
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > > +/*
> > > > > > > > > > + * If the device is broken, it will not use any
> > > > > > > > > > +buffers and waiting
> > > > > > > > > > + * for that to happen is pointless. We'll do the
> > > > > > > > > > +cleanup in the driver,
> > > > > > > > > > + * completing all requests for the device.
> > > > > > > > > > + */
> > > > > > > > > > +static void virtblk_cleanup_broken_device(struct
> > > > > > > > > > +virtio_blk *vblk)
> > > {
> > > > > > > > > > + struct request_queue *q = vblk->disk->queue;
> > > > > > > > > > +
> > > > > > > > > > + /*
> > > > > > > > > > + * Start freezing the queue, so that new requests
> > > > > > > > > > +keeps
> > > > > waiting at the
> > > > > > > > > > + * door of bio_queue_enter(). We cannot fully freeze
> > > > > > > > > > +the queue
> > > > > > > > > because
> > > > > > > > > > + * frozen queue is an empty queue and there are
> > > > > > > > > > +pending
> > > > > requests, so
> > > > > > > > > > + * only start freezing it.
> > > > > > > > > > + */
> > > > > > > > > > + blk_freeze_queue_start(q);
> > > > > > > > > > +
> > > > > > > > > > + /*
> > > > > > > > > > + * When quiescing completes, all ongoing dispatches
> > > > > > > > > > +have
> > > > > completed
> > > > > > > > > > + * and no new dispatch will happen towards the
> driver.
> > > > > > > > > > + */
> > > > > > > > > > + blk_mq_quiesce_queue(q);
> > > > > > > > > > +
> > > > > > > > > > + /*
> > > > > > > > > > + * Synchronize with any ongoing VQ callbacks that
> > > > > > > > > > +may have
> > > > > started
> > > > > > > > > > + * before the VQs were marked as broken. Any
> > > > > > > > > > +outstanding
> > > > > requests
> > > > > > > > > > + * will be completed by
> > > > > virtblk_complete_request_with_ioerr().
> > > > > > > > > > + */
> > > > > > > > > > + virtio_synchronize_cbs(vblk->vdev);
> > > > > > > > > > +
> > > > > > > > > > + /*
> > > > > > > > > > + * At this point, no new requests can enter the
> > > > > > > > > > +queue_rq()
> > > > > and
> > > > > > > > > > + * completion routine will not complete any new
> > > > > > > > > > +requests either for
> > > > > > > > > the
> > > > > > > > > > + * broken vq. Hence, it is safe to cancel all requests
> which are
> > > > > > > > > > + * started.
> > > > > > > > > > + */
> > > > > > > > > > + blk_mq_tagset_busy_iter(&vblk->tag_set,
> > > > > > > > > > +
> virtblk_complete_request_with_ioerr,
> > > > > vblk);
> > > > > > > > > > +
> > > > > > > > > > +blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> > > > > > > > > > +
> > > > > > > > > > + /*
> > > > > > > > > > + * All pending requests are cleaned up. Time to
> > > > > > > > > > +resume so
> > > > > that disk
> > > > > > > > > > + * deletion can be smooth. Start the HW queues so
> > > > > > > > > > +that when queue
> > > > > > > > > is
> > > > > > > > > > + * unquiesced requests can again enter the driver.
> > > > > > > > > > + */
> > > > > > > > > > + blk_mq_start_stopped_hw_queues(q, true);
> > > > > > > > > > +
> > > > > > > > > > + /*
> > > > > > > > > > + * Unquiescing will trigger dispatching any pending
> > > > > > > > > > +requests
> > > > > to the
> > > > > > > > > > + * driver which has crossed bio_queue_enter() to the
> driver.
> > > > > > > > > > + */
> > > > > > > > > > + blk_mq_unquiesce_queue(q);
> > > > > > > > > > +
> > > > > > > > > > + /*
> > > > > > > > > > + * Wait for all pending dispatches to terminate
> > > > > > > > > > +which may
> > > > > have been
> > > > > > > > > > + * initiated after unquiescing.
> > > > > > > > > > + */
> > > > > > > > > > + blk_mq_freeze_queue_wait(q);
> > > > > > > > > > +
> > > > > > > > > > + /*
> > > > > > > > > > + * Mark the disk dead so that once we unfreeze the
> > > > > > > > > > +queue,
> > > > > requests
> > > > > > > > > > + * waiting at the door of bio_queue_enter() can be
> > > > > > > > > > +aborted right
> > > > > > > > > away.
> > > > > > > > > > + */
> > > > > > > > > > + blk_mark_disk_dead(vblk->disk);
> > > > > > > > > > +
> > > > > > > > > > + /* Unfreeze the queue so that any waiting requests
> > > > > > > > > > +will be
> > > > > aborted. */
> > > > > > > > > > + blk_mq_unfreeze_queue_nomemrestore(q);
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > > static void virtblk_remove(struct virtio_device *vdev) {
> > > > > > > > > > struct virtio_blk *vblk = vdev->priv; @@ -1561,6
> > > > > > > > > > +1653,9 @@ static void virtblk_remove(struct virtio_device
> *vdev)
> > > > > > > > > > /* Make sure no work handler is accessing the device.
> */
> > > > > > > > > > flush_work(&vblk->config_work);
> > > > > > > > > >
> > > > > > > > > > + if (virtqueue_is_broken(vblk->vqs[0].vq))
> > > > > > > > > > + virtblk_cleanup_broken_device(vblk);
> > > > > > > > > > +
> > > > > > > > > > del_gendisk(vblk->disk);
> > > > > > > > > > blk_mq_free_tag_set(&vblk->tag_set);
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > 2.34.1
> > > > > > > > > >
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-25 19:08 ` Parav Pandit
@ 2025-06-25 19:22 ` Michael S. Tsirkin
2025-06-26 3:26 ` Parav Pandit
0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2025-06-25 19:22 UTC (permalink / raw)
To: Parav Pandit
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
On Wed, Jun 25, 2025 at 07:08:54PM +0000, Parav Pandit wrote:
>
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: 25 June 2025 05:15 PM
> >
> > On Wed, Jun 25, 2025 at 11:08:42AM +0000, Parav Pandit wrote:
> > >
> > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > Sent: 25 June 2025 04:34 PM
> > > >
> > > > On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
> > > > >
> > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > Sent: 25 June 2025 01:24 AM
> > > > > >
> > > > > > On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
> > > > > > >
> > > > > > >
> > > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > > Sent: 25 June 2025 12:37 AM
> > > > > > > >
> > > > > > > > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > > > Sent: 25 June 2025 12:26 AM
> > > > > > > > > >
> > > > > > > > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav Pandit
> > wrote:
> > > > > > > > > > > When the PCI device is surprise removed, requests may
> > > > > > > > > > > not be completed by the device as the VQ is marked as
> > > > > > > > > > > broken. Due to this, the disk deletion hangs.
> > > > > > > > > >
> > > > > > > > > > There are loops in the core virtio driver code that
> > > > > > > > > > expect device register reads to eventually return 0:
> > > > > > > > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > > > > > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> > > > > > > > > >
> > > > > > > > > > Is there a hang if these loops are hit when a device has
> > > > > > > > > > been surprise removed? I'm trying to understand whether
> > > > > > > > > > surprise removal is fully supported or whether this
> > > > > > > > > > patch is one step in that
> > > > > > direction.
> > > > > > > > > >
> > > > > > > > > I answered this in one of my previous replies to Michael,
> > > > > > > > > but I don't have the link handy.
> > > > > > > > > It is not fully supported by this patch. It will hang.
> > > > > > > > >
> > > > > > > > > This patch restores the driver to the same state it was
> > > > > > > > > in before the patch in the Fixes tag.
> > > > > > > > > Virtio-stack-level work is needed to support surprise
> > > > > > > > > removal, including the reset flow you rightly pointed out.
> > > > > > > >
> > > > > > > > Have plans to do that?
> > > > > > > >
> > > > > > > I haven't given it enough thought yet.
> > > > > >
> > > > > > This one is kind of pointless then? It just fixes the specific
> > > > > > race window that your test harness happens to hit?
> > > > > >
> > > > > It was reported by Li from Baidu, whose tests failed.
> > > > > I missed the "Reported-by" tag in v5. I had it until v4. :(
> > > > >
> > > > > > Maybe it's better to wait until someone does a comprehensive fix..
> > > > > >
> > > > > >
> > > > > Oh, I was under the impression that you wanted to step forward
> > > > > in the discussion of v4.
> > > > > If you prefer comprehensive support across the virtio layers, I
> > > > > suggest you revert the patch cited in the Fixes tag.
> > > > >
> > > > > > Otherwise, it is in a degraded state, as virtio never supported
> > > > > > surprise removal. By reverting the cited patch (or with this
> > > > > > fix), the requests and disk deletion will not hang.
> > > >
> > > > But they will hang in virtio core on reset, will they not? The tests
> > > > just do not happen to trigger this?
> > > >
> > > It will hang if it is a true surprise removal, which no device has
> > > done so far because it never worked (or did, but always hung and no
> > > one reported it yet).
> > >
> > > I am familiar with two or more PCI devices that report surprise
> > > removal, which do not complete the requests but still allow the device
> > > reset flow. This is because the device is still there on the PCI bus;
> > > device removal was reported only via side-band signals.
> >
> > So why do we care about it so much? I think it's great this patch exists, for
> > example it makes it easier to test surprise removal and find more bugs. But is
> > it better to just have it hang unconditionally? Are we now making a
> > commitment that it's working - one we don't seem to intend to implement?
> >
> The patch improves the situation from its current state.
> But as you posted, more changes in pci layer are needed.
> I didn't audit where else it may be needed.
>
> vp_reset() may need to return a success/failure status for the reset.
> Otherwise, during probe(), vp_reset() aborts the reset and attempts to load the driver for a removed device.
Yes, however this is no different from hot-unplug right after reset.
> I guess the suspend() callback, which waits indefinitely while freezing the queue, also needs adaptation.
Which callback is that? I don't understand.
> > > But I agree that for full support, changes across all virtio layers
> > > would be needed as new functionality (without a Fixes tag :) ).
> >
> >
> > Or with a fixes tag - lots of people just use it as a signal to mean "where can
> > this be reasonably backported to".
> >
> Yes, I think the fix for the older kernels is needed, hence I Cc'ed stable too.
>
> >
> > > > > Please let me know if I should re-send to revert the patch listed in fixes
> > tag.
> > > > >
> > > > > > > > > > Apart from that, I'm happy with the virtio_blk.c aspects
> > > > > > > > > > of the
> > > > patch:
> > > > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > > >
> > > > > > > > > Thanks.
> > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Fix it by aborting the requests when the VQ is broken.
> > > > > > > > > > >
> > > > > > > > > > > With this fix, fio now completes swiftly.
> > > > > > > > > > > An alternative of an IO timeout has been considered;
> > > > > > > > > > > however, when the driver knows the block device is
> > > > > > > > > > > unresponsive, swiftly clearing the requests enables
> > > > > > > > > > > users and upper layers to react quickly.
> > > > > > > > > > >
> > > > > > > > > > > Verified with multiple device unplug iterations with
> > > > > > > > > > > pending requests in virtio used ring and some pending
> > > > > > > > > > > with the
> > > > device.
> > > > > > > > > > >
> > > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise
> > > > > > > > > > > removal of virtio pci device")
> > > > > > > > > > > Cc: stable@vger.kernel.org
> > > > > > > > > > > Reported-by: Li RongQing <lirongqing@baidu.com>
> > > > > > > > > > > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
> > > > > > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > > > > > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> > > > > > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com>
> > > > > > > > > > >
> > > > > > > > > > > ---
> > > > > > > > > > > v4->v5:
> > > > > > > > > > > - fixed comment style where comment to start with one
> > > > > > > > > > > empty line at start
> > > > > > > > > > > - Addressed comments from Alok
> > > > > > > > > > > - fixed typo in broken vq check
> > > > > > > > > > > v3->v4:
> > > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > > - renamed virtblk_request_cancel() to
> > > > > > > > > > > virtblk_complete_request_with_ioerr()
> > > > > > > > > > > - Added comments for
> > > > > > > > > > > virtblk_complete_request_with_ioerr()
> > > > > > > > > > > - Renamed virtblk_broken_device_cleanup() to
> > > > > > > > > > > virtblk_cleanup_broken_device()
> > > > > > > > > > > - Added comments for virtblk_cleanup_broken_device()
> > > > > > > > > > > - Moved the broken vq check in virtblk_remove()
> > > > > > > > > > > - Fixed comment style to have first empty line
> > > > > > > > > > > - replaced freezed to frozen
> > > > > > > > > > > - Fixed comments rephrased
> > > > > > > > > > >
> > > > > > > > > > > v2->v3:
> > > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > > - updated comment for synchronizing with callbacks
> > > > > > > > > > >
> > > > > > > > > > > v1->v2:
> > > > > > > > > > > - Addressed comments from Stefan
> > > > > > > > > > > - fixed spelling to 'waiting'
> > > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > > - Dropped checking broken vq from queue_rq() and
> > queue_rqs()
> > > > > > > > > > > because it is checked in lower layer routines in
> > > > > > > > > > > virtio core
> > > > > > > > > > >
> > > > > > > > > > > v0->v1:
> > > > > > > > > > > - Fixed comments from Stefan to rename a cleanup
> > > > > > > > > > > function
> > > > > > > > > > > - Improved logic for handling any outstanding requests
> > > > > > > > > > > in bio layer
> > > > > > > > > > > - improved cancel callback to sync with ongoing done()
> > > > > > > > > > > ---
> > > > > > > > > > > drivers/block/virtio_blk.c | 95
> > > > > > > > > > > ++++++++++++++++++++++++++++++++++++++
> > > > > > > > > > > 1 file changed, 95 insertions(+)
> > > > > > > > > > >
> > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c
> > > > > > > > > > > b/drivers/block/virtio_blk.c index
> > > > > > > > > > > 7cffea01d868..c5e383c0ac48
> > > > > > > > > > > 100644
> > > > > > > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > > > > > > @@ -1554,6 +1554,98 @@ static int virtblk_probe(struct
> > > > > > > > > > > virtio_device
> > > > > > > > > > *vdev)
> > > > > > > > > > > return err;
> > > > > > > > > > > }
> > > > > > > > > > >
> > > > > > > > > > > +/*
> > > > > > > > > > > + * If the vq is broken, device will not complete requests.
> > > > > > > > > > > + * So we do it for the device.
> > > > > > > > > > > + */
> > > > > > > > > > > +static bool
> > > > > > > > > > > +virtblk_complete_request_with_ioerr(struct
> > > > > > > > > > > +request *rq, void *data) {
> > > > > > > > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > > > > > > > > > > + struct virtio_blk *vblk = data;
> > > > > > > > > > > + struct virtio_blk_vq *vq;
> > > > > > > > > > > + unsigned long flags;
> > > > > > > > > > > +
> > > > > > > > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num];
> > > > > > > > > > > +
> > > > > > > > > > > + spin_lock_irqsave(&vq->lock, flags);
> > > > > > > > > > > +
> > > > > > > > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > > > > > > > > > > + if (blk_mq_request_started(rq) &&
> > > > > > !blk_mq_request_completed(rq))
> > > > > > > > > > > + blk_mq_complete_request(rq);
> > > > > > > > > > > +
> > > > > > > > > > > + spin_unlock_irqrestore(&vq->lock, flags);
> > > > > > > > > > > + return true;
> > > > > > > > > > > +}
> > > > > > > > > > > +
> > > > > > > > > > > +/*
> > > > > > > > > > > + * If the device is broken, it will not use any
> > > > > > > > > > > +buffers and waiting
> > > > > > > > > > > + * for that to happen is pointless. We'll do the
> > > > > > > > > > > +cleanup in the driver,
> > > > > > > > > > > + * completing all requests for the device.
> > > > > > > > > > > + */
> > > > > > > > > > > +static void virtblk_cleanup_broken_device(struct
> > > > > > > > > > > +virtio_blk *vblk)
> > > > {
> > > > > > > > > > > + struct request_queue *q = vblk->disk->queue;
> > > > > > > > > > > +
> > > > > > > > > > > + /*
> > > > > > > > > > > + * Start freezing the queue, so that new requests
> > > > > > > > > > > +keeps
> > > > > > waiting at the
> > > > > > > > > > > + * door of bio_queue_enter(). We cannot fully freeze
> > > > > > > > > > > +the queue
> > > > > > > > > > because
> > > > > > > > > > > + * frozen queue is an empty queue and there are
> > > > > > > > > > > +pending
> > > > > > requests, so
> > > > > > > > > > > + * only start freezing it.
> > > > > > > > > > > + */
> > > > > > > > > > > + blk_freeze_queue_start(q);
> > > > > > > > > > > +
> > > > > > > > > > > + /*
> > > > > > > > > > > + * When quiescing completes, all ongoing dispatches
> > > > > > > > > > > +have
> > > > > > completed
> > > > > > > > > > > + * and no new dispatch will happen towards the
> > driver.
> > > > > > > > > > > + */
> > > > > > > > > > > + blk_mq_quiesce_queue(q);
> > > > > > > > > > > +
> > > > > > > > > > > + /*
> > > > > > > > > > > + * Synchronize with any ongoing VQ callbacks that
> > > > > > > > > > > +may have
> > > > > > started
> > > > > > > > > > > + * before the VQs were marked as broken. Any
> > > > > > > > > > > +outstanding
> > > > > > requests
> > > > > > > > > > > + * will be completed by
> > > > > > virtblk_complete_request_with_ioerr().
> > > > > > > > > > > + */
> > > > > > > > > > > + virtio_synchronize_cbs(vblk->vdev);
> > > > > > > > > > > +
> > > > > > > > > > > + /*
> > > > > > > > > > > + * At this point, no new requests can enter the
> > > > > > > > > > > +queue_rq()
> > > > > > and
> > > > > > > > > > > + * completion routine will not complete any new
> > > > > > > > > > > +requests either for
> > > > > > > > > > the
> > > > > > > > > > > + * broken vq. Hence, it is safe to cancel all requests
> > which are
> > > > > > > > > > > + * started.
> > > > > > > > > > > + */
> > > > > > > > > > > + blk_mq_tagset_busy_iter(&vblk->tag_set,
> > > > > > > > > > > +
> > virtblk_complete_request_with_ioerr,
> > > > > > vblk);
> > > > > > > > > > > +
> > > > > > > > > > > +blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> > > > > > > > > > > +
> > > > > > > > > > > + /*
> > > > > > > > > > > + * All pending requests are cleaned up. Time to
> > > > > > > > > > > +resume so
> > > > > > that disk
> > > > > > > > > > > + * deletion can be smooth. Start the HW queues so
> > > > > > > > > > > +that when queue
> > > > > > > > > > is
> > > > > > > > > > > + * unquiesced requests can again enter the driver.
> > > > > > > > > > > + */
> > > > > > > > > > > + blk_mq_start_stopped_hw_queues(q, true);
> > > > > > > > > > > +
> > > > > > > > > > > + /*
> > > > > > > > > > > + * Unquiescing will trigger dispatching any pending
> > > > > > > > > > > +requests
> > > > > > to the
> > > > > > > > > > > + * driver which has crossed bio_queue_enter() to the
> > driver.
> > > > > > > > > > > + */
> > > > > > > > > > > + blk_mq_unquiesce_queue(q);
> > > > > > > > > > > +
> > > > > > > > > > > + /*
> > > > > > > > > > > + * Wait for all pending dispatches to terminate
> > > > > > > > > > > +which may
> > > > > > have been
> > > > > > > > > > > + * initiated after unquiescing.
> > > > > > > > > > > + */
> > > > > > > > > > > + blk_mq_freeze_queue_wait(q);
> > > > > > > > > > > +
> > > > > > > > > > > + /*
> > > > > > > > > > > + * Mark the disk dead so that once we unfreeze the
> > > > > > > > > > > +queue,
> > > > > > requests
> > > > > > > > > > > + * waiting at the door of bio_queue_enter() can be
> > > > > > > > > > > +aborted right
> > > > > > > > > > away.
> > > > > > > > > > > + */
> > > > > > > > > > > + blk_mark_disk_dead(vblk->disk);
> > > > > > > > > > > +
> > > > > > > > > > > + /* Unfreeze the queue so that any waiting requests
> > > > > > > > > > > +will be
> > > > > > aborted. */
> > > > > > > > > > > + blk_mq_unfreeze_queue_nomemrestore(q);
> > > > > > > > > > > +}
> > > > > > > > > > > +
> > > > > > > > > > > static void virtblk_remove(struct virtio_device *vdev) {
> > > > > > > > > > > struct virtio_blk *vblk = vdev->priv; @@ -1561,6
> > > > > > > > > > > +1653,9 @@ static void virtblk_remove(struct virtio_device
> > *vdev)
> > > > > > > > > > > /* Make sure no work handler is accessing the device.
> > */
> > > > > > > > > > > flush_work(&vblk->config_work);
> > > > > > > > > > >
> > > > > > > > > > > + if (virtqueue_is_broken(vblk->vqs[0].vq))
> > > > > > > > > > > + virtblk_cleanup_broken_device(vblk);
> > > > > > > > > > > +
> > > > > > > > > > > del_gendisk(vblk->disk);
> > > > > > > > > > > blk_mq_free_tag_set(&vblk->tag_set);
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > 2.34.1
> > > > > > > > > > >
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-25 19:22 ` Michael S. Tsirkin
@ 2025-06-26 3:26 ` Parav Pandit
2025-06-26 6:04 ` Michael S. Tsirkin
0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2025-06-26 3:26 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: 26 June 2025 12:52 AM
>
> On Wed, Jun 25, 2025 at 07:08:54PM +0000, Parav Pandit wrote:
> >
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: 25 June 2025 05:15 PM
> > >
> > > On Wed, Jun 25, 2025 at 11:08:42AM +0000, Parav Pandit wrote:
> > > >
> > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > Sent: 25 June 2025 04:34 PM
> > > > >
> > > > > On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
> > > > > >
> > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > Sent: 25 June 2025 01:24 AM
> > > > > > >
> > > > > > > On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > > > Sent: 25 June 2025 12:37 AM
> > > > > > > > >
> > > > > > > > > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > > > > Sent: 25 June 2025 12:26 AM
> > > > > > > > > > >
> > > > > > > > > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav
> > > > > > > > > > > Pandit
> > > wrote:
> > > > > > > > > > > > When the PCI device is surprise removed, requests
> > > > > > > > > > > > may not be completed by the device as the VQ is
> > > > > > > > > > > > marked as broken. Due to this, the disk deletion hangs.
> > > > > > > > > > >
> > > > > > > > > > > There are loops in the core virtio driver code that
> > > > > > > > > > > expect device register reads to eventually return 0:
> > > > > > > > > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > > > > > > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> > > > > > > > > > >
> > > > > > > > > > > Is there a hang if these loops are hit when a device
> > > > > > > > > > > has been surprise removed? I'm trying to understand
> > > > > > > > > > > whether surprise removal is fully supported or
> > > > > > > > > > > whether this patch is one step in that
> > > > > > > direction.
> > > > > > > > > > >
> > > > > > > > > > I answered this in one of my previous replies to
> > > > > > > > > > Michael, but I don't have the link handy.
> > > > > > > > > > It is not fully supported by this patch. It will hang.
> > > > > > > > > >
> > > > > > > > > > This patch restores the driver to the same state it
> > > > > > > > > > was in before the patch in the Fixes tag.
> > > > > > > > > > Virtio-stack-level work is needed to support surprise
> > > > > > > > > > removal, including the reset flow you rightly pointed out.
> > > > > > > > >
> > > > > > > > > Have plans to do that?
> > > > > > > > >
> > > > > > > > I haven't given it enough thought yet.
> > > > > > >
> > > > > > > This one is kind of pointless then? It just fixes the
> > > > > > > specific race window that your test harness happens to hit?
> > > > > > >
> > > > > > It was reported by Li from Baidu, whose tests failed.
> > > > > > I missed the "Reported-by" tag in v5. I had it until v4. :(
> > > > > >
> > > > > > > Maybe it's better to wait until someone does a comprehensive fix..
> > > > > > >
> > > > > > >
> > > > > > Oh, I was under the impression that you wanted to step forward
> > > > > > in the discussion of v4.
> > > > > > If you prefer comprehensive support across the virtio layers,
> > > > > > I suggest you revert the patch cited in the Fixes tag.
> > > > > >
> > > > > > Otherwise, it is in a degraded state, as virtio never supported
> > > > > > surprise removal. By reverting the cited patch (or with this
> > > > > > fix), the requests and disk deletion will not hang.
> > > > >
> > > > > But they will hang in virtio core on reset, will they not? The
> > > > > tests just do not happen to trigger this?
> > > > >
> > > > It will hang if it is a true surprise removal, which no device
> > > > has done so far because it never worked (or did, but always hung
> > > > and no one reported it yet).
> > > >
> > > > I am familiar with two or more PCI devices that report surprise
> > > > removal, which do not complete the requests but still allow the
> > > > device reset flow. This is because the device is still there on
> > > > the PCI bus; device removal was reported only via side-band signals.
> > >
> > > So why do we care about it so much? I think it's great this patch
> > > exists, for example it makes it easier to test surprise removal and
> > > find more bugs. But is it better to just have it hang
> > > unconditionally? Are we now making a commitment that it's working -
> one we don't seem to intend to implement?
> > >
> > The patch improves the situation from its current state.
> > But as you posted, more changes in pci layer are needed.
> > I didn't audit where else it may be needed.
> >
> > vp_reset() may need to return a success/failure status for the reset.
> > Otherwise, during probe(), vp_reset() aborts the reset and attempts to
> > load the driver for a removed device.
>
> Yes, however this is no different from hot-unplug right after reset.
>
For hot-unplug right after reset, we likely need a timeout handler,
because the block driver waiting for IO inside the remove() callback may not get notified by the driver core to synchronize the ongoing remove().
> > I guess the suspend() callback, which waits indefinitely while freezing
> > the queue, also needs adaptation.
>
> Which callback is that? I don't understand.
virtblk_freeze() at [1].
[1] https://elixir.bootlin.com/linux/v6.15.3/source/drivers/block/virtio_blk.c#L1622
>
>
> > > > But I agree that for full support, changes across all virtio
> > > > layers would be needed as new functionality (without a Fixes tag :) ).
> > >
> > >
> > > Or with a fixes tag - lots of people just use it as a signal to mean
> > > "where can this be reasonably backported to".
> > >
> > Yes, I think the fix for the older kernels is needed, hence I Cc'ed stable too.
> >
> > >
> > > > > > Please let me know if I should re-send to revert the patch
> > > > > > listed in fixes
> > > tag.
> > > > > >
> > > > > > > > > > > Apart from that, I'm happy with the virtio_blk.c
> > > > > > > > > > > aspects of the
> > > > > patch:
> > > > > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > > > >
> > > > > > > > > > Thanks.
> > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Fix it by aborting the requests when the VQ is broken.
> > > > > > > > > > > >
> > > > > > > > > > > > With this fix, fio now completes swiftly.
> > > > > > > > > > > > An alternative of an IO timeout has been considered;
> > > > > > > > > > > > however, when the driver knows the block device is
> > > > > > > > > > > > unresponsive, swiftly clearing the requests enables
> > > > > > > > > > > > users and upper layers to react quickly.
> > > > > > > > > > > >
> > > > > > > > > > > > Verified with multiple device unplug iterations
> > > > > > > > > > > > with pending requests in virtio used ring and some
> > > > > > > > > > > > pending with the
> > > > > device.
> > > > > > > > > > > >
> > > > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise
> > > > > > > > > > > > removal of virtio pci device")
> > > > > > > > > > > > Cc: stable@vger.kernel.org
> > > > > > > > > > > > Reported-by: Li RongQing <lirongqing@baidu.com>
> > > > > > > > > > > > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
> > > > > > > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > > > > > > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> > > > > > > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com>
> > > > > > > > > > > >
> > > > > > > > > > > > ---
> > > > > > > > > > > > v4->v5:
> > > > > > > > > > > > - fixed comment style where comment to start with
> > > > > > > > > > > > one empty line at start
> > > > > > > > > > > > - Addressed comments from Alok
> > > > > > > > > > > > - fixed typo in broken vq check
> > > > > > > > > > > > v3->v4:
> > > > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > > > - renamed virtblk_request_cancel() to
> > > > > > > > > > > > virtblk_complete_request_with_ioerr()
> > > > > > > > > > > > - Added comments for
> > > > > > > > > > > > virtblk_complete_request_with_ioerr()
> > > > > > > > > > > > - Renamed virtblk_broken_device_cleanup() to
> > > > > > > > > > > > virtblk_cleanup_broken_device()
> > > > > > > > > > > > - Added comments for
> > > > > > > > > > > > virtblk_cleanup_broken_device()
> > > > > > > > > > > > - Moved the broken vq check in virtblk_remove()
> > > > > > > > > > > > - Fixed comment style to have first empty line
> > > > > > > > > > > > - replaced freezed to frozen
> > > > > > > > > > > > - Fixed comments rephrased
> > > > > > > > > > > >
> > > > > > > > > > > > v2->v3:
> > > > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > > > - updated comment for synchronizing with callbacks
> > > > > > > > > > > >
> > > > > > > > > > > > v1->v2:
> > > > > > > > > > > > - Addressed comments from Stephan
> > > > > > > > > > > > - fixed spelling to 'waiting'
> > > > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > > > - Dropped checking broken vq from queue_rq() and
> > > queue_rqs()
> > > > > > > > > > > > because it is checked in lower layer routines in
> > > > > > > > > > > > virtio core
> > > > > > > > > > > >
> > > > > > > > > > > > v0->v1:
> > > > > > > > > > > > - Fixed comments from Stefan to rename a cleanup
> > > > > > > > > > > > function
> > > > > > > > > > > > - Improved logic for handling any outstanding requests
> > > > > > > > > > > > in bio layer
> > > > > > > > > > > > - improved cancel callback to sync with ongoing
> > > > > > > > > > > > done()
> > > > > > > > > > > > ---
> > > > > > > > > > > > drivers/block/virtio_blk.c | 95
> > > > > > > > > > > > ++++++++++++++++++++++++++++++++++++++
> > > > > > > > > > > > 1 file changed, 95 insertions(+)
> > > > > > > > > > > >
> > > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c
> > > > > > > > > > > > b/drivers/block/virtio_blk.c index
> > > > > > > > > > > > 7cffea01d868..c5e383c0ac48
> > > > > > > > > > > > 100644
> > > > > > > > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > > > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > > > > > > > @@ -1554,6 +1554,98 @@ static int
> > > > > > > > > > > > virtblk_probe(struct virtio_device
> > > > > > > > > > > *vdev)
> > > > > > > > > > > > return err;
> > > > > > > > > > > > }
> > > > > > > > > > > >
> > > > > > > > > > > > +/*
> > > > > > > > > > > > + * If the vq is broken, device will not complete requests.
> > > > > > > > > > > > + * So we do it for the device.
> > > > > > > > > > > > + */
> > > > > > > > > > > > +static bool
> > > > > > > > > > > > +virtblk_complete_request_with_ioerr(struct
> > > > > > > > > > > > +request *rq, void *data) {
> > > > > > > > > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > > > > > > > > > > > + struct virtio_blk *vblk = data;
> > > > > > > > > > > > + struct virtio_blk_vq *vq;
> > > > > > > > > > > > + unsigned long flags;
> > > > > > > > > > > > +
> > > > > > > > > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num];
> > > > > > > > > > > > +
> > > > > > > > > > > > + spin_lock_irqsave(&vq->lock, flags);
> > > > > > > > > > > > +
> > > > > > > > > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > > > > > > > > > > > + if (blk_mq_request_started(rq) &&
> > > > > > > !blk_mq_request_completed(rq))
> > > > > > > > > > > > + blk_mq_complete_request(rq);
> > > > > > > > > > > > +
> > > > > > > > > > > > + spin_unlock_irqrestore(&vq->lock, flags);
> > > > > > > > > > > > + return true;
> > > > > > > > > > > > +}
> > > > > > > > > > > > +
> > > > > > > > > > > > +/*
> > > > > > > > > > > > + * If the device is broken, it will not use any
> > > > > > > > > > > > +buffers and waiting
> > > > > > > > > > > > + * for that to happen is pointless. We'll do the
> > > > > > > > > > > > +cleanup in the driver,
> > > > > > > > > > > > + * completing all requests for the device.
> > > > > > > > > > > > + */
> > > > > > > > > > > > +static void virtblk_cleanup_broken_device(struct
> > > > > > > > > > > > +virtio_blk *vblk)
> > > > > {
> > > > > > > > > > > > + struct request_queue *q = vblk->disk->queue;
> > > > > > > > > > > > +
> > > > > > > > > > > > + /*
> > > > > > > > > > > > + * Start freezing the queue, so that new
> > > > > > > > > > > > +requests keeps
> > > > > > > waiting at the
> > > > > > > > > > > > + * door of bio_queue_enter(). We cannot fully
> > > > > > > > > > > > +freeze the queue
> > > > > > > > > > > because
> > > > > > > > > > > > + * frozen queue is an empty queue and there are
> > > > > > > > > > > > +pending
> > > > > > > requests, so
> > > > > > > > > > > > + * only start freezing it.
> > > > > > > > > > > > + */
> > > > > > > > > > > > + blk_freeze_queue_start(q);
> > > > > > > > > > > > +
> > > > > > > > > > > > + /*
> > > > > > > > > > > > + * When quiescing completes, all ongoing
> > > > > > > > > > > > +dispatches have
> > > > > > > completed
> > > > > > > > > > > > + * and no new dispatch will happen towards the
> > > driver.
> > > > > > > > > > > > + */
> > > > > > > > > > > > + blk_mq_quiesce_queue(q);
> > > > > > > > > > > > +
> > > > > > > > > > > > + /*
> > > > > > > > > > > > + * Synchronize with any ongoing VQ callbacks
> > > > > > > > > > > > +that may have
> > > > > > > started
> > > > > > > > > > > > + * before the VQs were marked as broken. Any
> > > > > > > > > > > > +outstanding
> > > > > > > requests
> > > > > > > > > > > > + * will be completed by
> > > > > > > virtblk_complete_request_with_ioerr().
> > > > > > > > > > > > + */
> > > > > > > > > > > > + virtio_synchronize_cbs(vblk->vdev);
> > > > > > > > > > > > +
> > > > > > > > > > > > + /*
> > > > > > > > > > > > + * At this point, no new requests can enter the
> > > > > > > > > > > > +queue_rq()
> > > > > > > and
> > > > > > > > > > > > + * completion routine will not complete any new
> > > > > > > > > > > > +requests either for
> > > > > > > > > > > the
> > > > > > > > > > > > + * broken vq. Hence, it is safe to cancel all
> > > > > > > > > > > > +requests
> > > which are
> > > > > > > > > > > > + * started.
> > > > > > > > > > > > + */
> > > > > > > > > > > > + blk_mq_tagset_busy_iter(&vblk->tag_set,
> > > > > > > > > > > > +
> > > virtblk_complete_request_with_ioerr,
> > > > > > > vblk);
> > > > > > > > > > > > +
> > > > > > > > > > > > +blk_mq_tagset_wait_completed_request(&vblk->tag_s
> > > > > > > > > > > > +et);
> > > > > > > > > > > > +
> > > > > > > > > > > > + /*
> > > > > > > > > > > > + * All pending requests are cleaned up. Time to
> > > > > > > > > > > > +resume so
> > > > > > > that disk
> > > > > > > > > > > > + * deletion can be smooth. Start the HW queues
> > > > > > > > > > > > +so that when queue
> > > > > > > > > > > is
> > > > > > > > > > > > + * unquiesced requests can again enter the driver.
> > > > > > > > > > > > + */
> > > > > > > > > > > > + blk_mq_start_stopped_hw_queues(q, true);
> > > > > > > > > > > > +
> > > > > > > > > > > > + /*
> > > > > > > > > > > > + * Unquiescing will trigger dispatching any
> > > > > > > > > > > > +pending requests
> > > > > > > to the
> > > > > > > > > > > > + * driver which has crossed bio_queue_enter() to
> > > > > > > > > > > > +the
> > > driver.
> > > > > > > > > > > > + */
> > > > > > > > > > > > + blk_mq_unquiesce_queue(q);
> > > > > > > > > > > > +
> > > > > > > > > > > > + /*
> > > > > > > > > > > > + * Wait for all pending dispatches to terminate
> > > > > > > > > > > > +which may
> > > > > > > have been
> > > > > > > > > > > > + * initiated after unquiescing.
> > > > > > > > > > > > + */
> > > > > > > > > > > > + blk_mq_freeze_queue_wait(q);
> > > > > > > > > > > > +
> > > > > > > > > > > > + /*
> > > > > > > > > > > > + * Mark the disk dead so that once we unfreeze
> > > > > > > > > > > > +the queue,
> > > > > > > requests
> > > > > > > > > > > > + * waiting at the door of bio_queue_enter() can
> > > > > > > > > > > > +be aborted right
> > > > > > > > > > > away.
> > > > > > > > > > > > + */
> > > > > > > > > > > > + blk_mark_disk_dead(vblk->disk);
> > > > > > > > > > > > +
> > > > > > > > > > > > + /* Unfreeze the queue so that any waiting
> > > > > > > > > > > > +requests will be
> > > > > > > aborted. */
> > > > > > > > > > > > + blk_mq_unfreeze_queue_nomemrestore(q);
> > > > > > > > > > > > +}
> > > > > > > > > > > > +
> > > > > > > > > > > > static void virtblk_remove(struct virtio_device *vdev) {
> > > > > > > > > > > > struct virtio_blk *vblk = vdev->priv; @@ -1561,6
> > > > > > > > > > > > +1653,9 @@ static void virtblk_remove(struct
> > > > > > > > > > > > +virtio_device
> > > *vdev)
> > > > > > > > > > > > /* Make sure no work handler is accessing the device.
> > > */
> > > > > > > > > > > > flush_work(&vblk->config_work);
> > > > > > > > > > > >
> > > > > > > > > > > > + if (virtqueue_is_broken(vblk->vqs[0].vq))
> > > > > > > > > > > > + virtblk_cleanup_broken_device(vblk);
> > > > > > > > > > > > +
> > > > > > > > > > > > del_gendisk(vblk->disk);
> > > > > > > > > > > > blk_mq_free_tag_set(&vblk->tag_set);
> > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > > > 2.34.1
> > > > > > > > > > > >
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-26 3:26 ` Parav Pandit
@ 2025-06-26 6:04 ` Michael S. Tsirkin
2025-06-26 6:29 ` Parav Pandit
0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2025-06-26 6:04 UTC (permalink / raw)
To: Parav Pandit
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
On Thu, Jun 26, 2025 at 03:26:12AM +0000, Parav Pandit wrote:
>
>
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: 26 June 2025 12:52 AM
> >
> > On Wed, Jun 25, 2025 at 07:08:54PM +0000, Parav Pandit wrote:
> > >
> > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > Sent: 25 June 2025 05:15 PM
> > > >
> > > > On Wed, Jun 25, 2025 at 11:08:42AM +0000, Parav Pandit wrote:
> > > > >
> > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > Sent: 25 June 2025 04:34 PM
> > > > > >
> > > > > > On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
> > > > > > >
> > > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > > Sent: 25 June 2025 01:24 AM
> > > > > > > >
> > > > > > > > On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > > > > Sent: 25 June 2025 12:37 AM
> > > > > > > > > >
> > > > > > > > > > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit wrote:
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > > > > > Sent: 25 June 2025 12:26 AM
> > > > > > > > > > > >
> > > > > > > > > > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav
> > > > > > > > > > > > Pandit
> > > > wrote:
> > > > > > > > > > > > > When the PCI device is surprise removed, requests
> > > > > > > > > > > > > may not complete the device as the VQ is marked as
> > broken.
> > > > > > > > > > > > > Due to this, the disk deletion hangs.
> > > > > > > > > > > >
> > > > > > > > > > > > There are loops in the core virtio driver code that
> > > > > > > > > > > > expect device register reads to eventually return 0:
> > > > > > > > > > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > > > > > > > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> > > > > > > > > > > >
> > > > > > > > > > > > Is there a hang if these loops are hit when a device
> > > > > > > > > > > > has been surprise removed? I'm trying to understand
> > > > > > > > > > > > whether surprise removal is fully supported or
> > > > > > > > > > > > whether this patch is one step in that
> > > > > > > > direction.
> > > > > > > > > > > >
> > > > > > > > > > > I answered this in one of the previous replies to
> > > > > > > > > > > Michael, but don't have the link handy.
> > > > > > > > > > > It is not fully supported by this patch. It will hang.
> > > > > > > > > > >
> > > > > > > > > > > This patch restores the driver to the same state it was
> > > > > > > > > > > in before the patch in the Fixes tag.
> > > > > > > > > > > The virtio stack level work is needed to support
> > > > > > > > > > > surprise removal, including
> > > > > > > > > > > the reset flow you rightly pointed out.
> > > > > > > > > >
> > > > > > > > > > Have plans to do that?
> > > > > > > > > >
> > > > > > > > > I haven't given it enough thought yet.
> > > > > > > >
> > > > > > > > This one is kind of pointless then? It just fixes the
> > > > > > > > specific race window that your test harness happens to hit?
> > > > > > > >
> > > > > > > It was reported by Li from Baidu, whose tests failed.
> > > > > > > I missed adding the "Reported-by" tag in v5. I had it until v4. :(
> > > > > > >
> > > > > > > > Maybe it's better to wait until someone does a comprehensive fix..
> > > > > > > >
> > > > > > > >
> > > > > > > Oh, I was under the impression that you wanted to step forward
> > > > > > > in the discussion
> > > > > > > of v4.
> > > > > > > If you prefer comprehensive support across the virtio layers,
> > > > > > > I suggest you
> > > > > > > should revert the patch cited in the Fixes tag.
> > > > > > >
> > > > > > > Otherwise, it is in a degraded state, as virtio never supported
> > > > > > > surprise
> > > > > > > removal.
> > > > > > > By reverting the cited patch (or with this fix), the requests
> > > > > > > and disk deletion
> > > > > > will not hang.
> > > > > >
> > > > > > But they will hang in virtio core on reset, will they not? The
> > > > > > tests just do not happen to trigger this?
> > > > > >
> > > > > It will hang if it is a true surprise removal, which no device has
> > > > > done so far because it never worked.
> > > > > (or it did, but always hung, and no one has reported it yet)
> > > > >
> > > > > I am familiar with two or more PCI devices that report surprise
> > > > > removal, which do not complete the requests but still allow the
> > > > > device reset flow. This is because the device is still there on the
> > > > > PCI bus; device removal was reported only via side-band signals.
> > > >
> > > > So why do we care about it so much? I think it's great this patch
> > > > exists, for example it makes it easier to test surprise removal and
> > > > find more bugs. But is it better to just have it hang
> > > > unconditionally? Are we now making a commitment that it's working -
> > one we don't seem to intend to implement?
> > > >
> > > The patch improves the situation from its current state.
> > > But as you posted, more changes in the PCI layer are needed.
> > > I didn't audit where else it may be needed.
> > >
> > > vp_reset() may need to return the status of a successful/failed reset.
> > > Otherwise during probe(), vp_reset() aborts the reset and attempts to load
> > > the driver for a removed device.
> >
> > yes however this is not at all different than hotunplug right after reset.
> >
> For hotunplug after reset, we likely need a timeout handler.
> Because the block driver running inside the remove() callback, waiting for the IO, may not get notified by the driver core to synchronize with the ongoing remove().
Notified of what? So is the scenario that graceful remove starts,
and meanwhile a surprise removal happens?
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-26 6:04 ` Michael S. Tsirkin
@ 2025-06-26 6:29 ` Parav Pandit
2025-06-26 6:33 ` Michael S. Tsirkin
0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2025-06-26 6:29 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: 26 June 2025 11:34 AM
>
> On Thu, Jun 26, 2025 at 03:26:12AM +0000, Parav Pandit wrote:
> >
> >
> > > From: Michael S. Tsirkin <mst@redhat.com>
> > > Sent: 26 June 2025 12:52 AM
> > >
> > > On Wed, Jun 25, 2025 at 07:08:54PM +0000, Parav Pandit wrote:
> > > >
> > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > Sent: 25 June 2025 05:15 PM
> > > > >
> > > > > On Wed, Jun 25, 2025 at 11:08:42AM +0000, Parav Pandit wrote:
> > > > > >
> > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > Sent: 25 June 2025 04:34 PM
> > > > > > >
> > > > > > > On Wed, Jun 25, 2025 at 02:55:27AM +0000, Parav Pandit wrote:
> > > > > > > >
> > > > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > > > Sent: 25 June 2025 01:24 AM
> > > > > > > > >
> > > > > > > > > On Tue, Jun 24, 2025 at 07:11:29PM +0000, Parav Pandit wrote:
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
> > > > > > > > > > > Sent: 25 June 2025 12:37 AM
> > > > > > > > > > >
> > > > > > > > > > > On Tue, Jun 24, 2025 at 07:01:44PM +0000, Parav Pandit
> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > > From: Stefan Hajnoczi <stefanha@redhat.com>
> > > > > > > > > > > > > Sent: 25 June 2025 12:26 AM
> > > > > > > > > > > > >
> > > > > > > > > > > > > On Mon, Jun 02, 2025 at 02:44:33AM +0000, Parav
> > > > > > > > > > > > > Pandit
> > > > > wrote:
> > > > > > > > > > > > > > When the PCI device is surprise removed, requests may not be
> > > > > > > > > > > > > > completed by the device as the VQ is marked as broken.
> > > > > > > > > > > > > > Due to this, the disk deletion hangs.
> > > > > > > > > > > > >
> > > > > > > > > > > > > There are loops in the core virtio driver code
> > > > > > > > > > > > > that expect device register reads to eventually return 0:
> > > > > > > > > > > > > drivers/virtio/virtio_pci_modern.c:vp_reset()
> > > > > > > > > > > > > drivers/virtio/virtio_pci_modern_dev.c:vp_modern_set_queue_reset()
> > > > > > > > > > > > >
> > > > > > > > > > > > > Is there a hang if these loops are hit when a
> > > > > > > > > > > > > device has been surprise removed? I'm trying to
> > > > > > > > > > > > > understand whether surprise removal is fully
> > > > > > > > > > > > > supported or whether this patch is one step in that direction.
> > > > > > > > > > > > >
> > > > > > > > > > > > In one of the previous replies I answered Michael,
> > > > > > > > > > > > but I don't have the link handy.
> > > > > > > > > > > > It is not fully supported by this patch. It will hang.
> > > > > > > > > > > >
> > > > > > > > > > > > This patch restores the driver to the same state
> > > > > > > > > > > > it was in before the patch cited in the Fixes tag.
> > > > > > > > > > > > Virtio stack-level work is needed to support
> > > > > > > > > > > > surprise removal, including the reset flow you rightly pointed out.
> > > > > > > > > > >
> > > > > > > > > > > Have plans to do that?
> > > > > > > > > > >
> > > > > > > > > > Didn't give enough thoughts on it yet.
> > > > > > > > >
> > > > > > > > > This one is kind of pointless then? It just fixes the
> > > > > > > > > specific race window that your test harness happens to hit?
> > > > > > > > >
> > > > > > > > It was reported by Li from Baidu, whose tests failed.
> > > > > > > > I missed adding the "Reported-by" tag in v5. I had it until v4. :(
> > > > > > > >
> > > > > > > > > Maybe it's better to wait until someone does a comprehensive
> fix..
> > > > > > > > >
> > > > > > > > >
> > > > > > > > Oh, I was under the impression that you wanted to move the
> > > > > > > > discussion of v4 forward.
> > > > > > > > If you prefer comprehensive support across the virtio layers,
> > > > > > > > I suggest you revert the patch cited in the Fixes tag.
> > > > > > > >
> > > > > > > > Otherwise, it is in a degraded state, as virtio never
> > > > > > > > supported surprise removal.
> > > > > > > > By reverting the cited patch (or with this fix), the requests
> > > > > > > > and disk deletion will not hang.
> > > > > > >
> > > > > > > But they will hang in virtio core on reset, will they not?
> > > > > > > The tests just do not happen to trigger this?
> > > > > > >
> > > > > > It will hang if it is a true surprise removal, which no device
> > > > > > has done so far because it never worked
> > > > > > (or it did, but always hung, and no one has reported it yet).
> > > > > >
> > > > > > I am familiar with two or more PCI devices that report surprise
> > > > > > removal, which do not complete the requests but still allow the
> > > > > > device reset flow.
> > > > > > This is because the device is still there on the PCI bus; the
> > > > > > device removal was reported only via side-band signals.
> > > > >
> > > > > So why do we care about it so much? I think it's great this
> > > > > patch exists, for example it makes it easier to test surprise
> > > > > removal and find more bugs. But is it better to just have it
> > > > > hang unconditionally? Are we now making a commitment that it's
> > > > > working - one we don't seem to intend to implement?
> > > > >
> > > > The patch improves the situation from its current state.
> > > > But as you posted, more changes in the PCI layer are needed.
> > > > I didn't audit where else changes may be needed.
> > > >
> > > > vp_reset() may need to return the status of a successful/failed reset.
> > > > Otherwise, during probe(), vp_reset() aborts the reset and attempts to
> > > > load the driver for a removed device.
> > >
> > > yes, however this is not at all different from hotunplug right after reset.
> > >
> > For hotunplug after reset, we likely need a timeout handler,
> > because the block driver running inside the remove() callback, waiting for
> > the IO, may not get notified by the driver core to synchronize the ongoing
> > remove().
>
>
> Notified of what?
Notification that surprise-removal occurred.
> So is the scenario that graceful remove starts, and
> meanwhile a surprise removal happens?
>
Right.
> >
> > > > I guess the suspend() callback, which waits indefinitely while
> > > > freezing the queue, also needs adaptation.
> > >
> > > Which callback is that I don't understand.
> > virtblk_freeze() at [1].
> >
> > [1] https://elixir.bootlin.com/linux/v6.15.3/source/drivers/block/virtio_blk.c#L1622
> >
> > >
> > >
> > > > > > But I agree that for full support, changes across all virtio
> > > > > > layers would be needed as new functionality (without a Fixes
> > > > > > tag :) ).
> > > > >
> > > > >
> > > > > Or with a fixes tag - lots of people just use it as a signal to
> > > > > mean "where can this be reasonably backported to".
> > > > >
> > > > Yes, I think the fix for the older kernels is needed, hence I cced stable too.
> > > >
> > > > >
> > > > > > > > Please let me know if I should re-send to revert the patch
> > > > > > > > listed in fixes
> > > > > tag.
> > > > > > > >
> > > > > > > > > > > > > Apart from that, I'm happy with the virtio_blk.c
> > > > > > > > > > > > > aspects of the
> > > > > > > patch:
> > > > > > > > > > > > > Reviewed-by: Stefan Hajnoczi
> > > > > > > > > > > > > <stefanha@redhat.com>
> > > > > > > > > > > > >
> > > > > > > > > > > > Thanks.
> > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Fix it by aborting the requests when the VQ is broken.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > With this fix, now fio completes swiftly.
> > > > > > > > > > > > > > An alternative of an IO timeout has been considered; however,
> > > > > > > > > > > > > > when the driver knows about an unresponsive block device,
> > > > > > > > > > > > > > swiftly clearing the requests enables users and upper layers
> > > > > > > > > > > > > > to react quickly.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Verified with multiple device unplug iterations with pending
> > > > > > > > > > > > > > requests in the virtio used ring and some pending with the device.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> > > > > > > > > > > > > > Cc: stable@vger.kernel.org
> > > > > > > > > > > > > > Reported-by: Li RongQing <lirongqing@baidu.com>
> > > > > > > > > > > > > > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
> > > > > > > > > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > > > > > > > > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> > > > > > > > > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com>
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > ---
> > > > > > > > > > > > > > v4->v5:
> > > > > > > > > > > > > > - fixed comment style where comment to start with one empty line at start
> > > > > > > > > > > > > > - Addressed comments from Alok
> > > > > > > > > > > > > > - fixed typo in broken vq check
> > > > > > > > > > > > > > v3->v4:
> > > > > > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > > > > > - renamed virtblk_request_cancel() to
> > > > > > > > > > > > > >   virtblk_complete_request_with_ioerr()
> > > > > > > > > > > > > > - Added comments for virtblk_complete_request_with_ioerr()
> > > > > > > > > > > > > > - Renamed virtblk_broken_device_cleanup() to
> > > > > > > > > > > > > >   virtblk_cleanup_broken_device()
> > > > > > > > > > > > > > - Added comments for virtblk_cleanup_broken_device()
> > > > > > > > > > > > > > - Moved the broken vq check in virtblk_remove()
> > > > > > > > > > > > > > - Fixed comment style to have first empty line
> > > > > > > > > > > > > > - replaced freezed to frozen
> > > > > > > > > > > > > > - Fixed comments rephrased
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > v2->v3:
> > > > > > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > > > > > - updated comment for synchronizing with callbacks
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > v1->v2:
> > > > > > > > > > > > > > - Addressed comments from Stefan
> > > > > > > > > > > > > >   - fixed spelling to 'waiting'
> > > > > > > > > > > > > > - Addressed comments from Michael
> > > > > > > > > > > > > >   - Dropped checking broken vq from queue_rq() and queue_rqs()
> > > > > > > > > > > > > >     because it is checked in lower layer routines in virtio core
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > v0->v1:
> > > > > > > > > > > > > > - Fixed comments from Stefan to rename a cleanup function
> > > > > > > > > > > > > > - Improved logic for handling any outstanding requests in bio layer
> > > > > > > > > > > > > > - improved cancel callback to sync with ongoing done()
> > > > > > > > > > > > > > ---
> > > > > > > > > > > > > >  drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
> > > > > > > > > > > > > >  1 file changed, 95 insertions(+)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > > > > > > > > > > > > index 7cffea01d868..c5e383c0ac48 100644
> > > > > > > > > > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > > > > > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > > > > > > > > > @@ -1554,6 +1554,98 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > > > > > > > > > >  	return err;
> > > > > > > > > > > > > >  }
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > +/*
> > > > > > > > > > > > > > + * If the vq is broken, the device will not complete requests.
> > > > > > > > > > > > > > + * So we do it for the device.
> > > > > > > > > > > > > > + */
> > > > > > > > > > > > > > +static bool virtblk_complete_request_with_ioerr(struct request *rq, void *data)
> > > > > > > > > > > > > > +{
> > > > > > > > > > > > > > +	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > > > > > > > > > > > > > +	struct virtio_blk *vblk = data;
> > > > > > > > > > > > > > +	struct virtio_blk_vq *vq;
> > > > > > > > > > > > > > +	unsigned long flags;
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	vq = &vblk->vqs[rq->mq_hctx->queue_num];
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	spin_lock_irqsave(&vq->lock, flags);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > > > > > > > > > > > > > +	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
> > > > > > > > > > > > > > +		blk_mq_complete_request(rq);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	spin_unlock_irqrestore(&vq->lock, flags);
> > > > > > > > > > > > > > +	return true;
> > > > > > > > > > > > > > +}
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +/*
> > > > > > > > > > > > > > + * If the device is broken, it will not use any buffers, and waiting
> > > > > > > > > > > > > > + * for that to happen is pointless. We'll do the cleanup in the driver,
> > > > > > > > > > > > > > + * completing all requests for the device.
> > > > > > > > > > > > > > + */
> > > > > > > > > > > > > > +static void virtblk_cleanup_broken_device(struct virtio_blk *vblk)
> > > > > > > > > > > > > > +{
> > > > > > > > > > > > > > +	struct request_queue *q = vblk->disk->queue;
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	/*
> > > > > > > > > > > > > > +	 * Start freezing the queue, so that new requests keep waiting at the
> > > > > > > > > > > > > > +	 * door of bio_queue_enter(). We cannot fully freeze the queue because
> > > > > > > > > > > > > > +	 * a frozen queue is an empty queue and there are pending requests, so
> > > > > > > > > > > > > > +	 * only start freezing it.
> > > > > > > > > > > > > > +	 */
> > > > > > > > > > > > > > +	blk_freeze_queue_start(q);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	/*
> > > > > > > > > > > > > > +	 * When quiescing completes, all ongoing dispatches have completed
> > > > > > > > > > > > > > +	 * and no new dispatch will happen towards the driver.
> > > > > > > > > > > > > > +	 */
> > > > > > > > > > > > > > +	blk_mq_quiesce_queue(q);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	/*
> > > > > > > > > > > > > > +	 * Synchronize with any ongoing VQ callbacks that may have started
> > > > > > > > > > > > > > +	 * before the VQs were marked as broken. Any outstanding requests
> > > > > > > > > > > > > > +	 * will be completed by virtblk_complete_request_with_ioerr().
> > > > > > > > > > > > > > +	 */
> > > > > > > > > > > > > > +	virtio_synchronize_cbs(vblk->vdev);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	/*
> > > > > > > > > > > > > > +	 * At this point, no new requests can enter queue_rq(), and the
> > > > > > > > > > > > > > +	 * completion routine will not complete any new requests either for
> > > > > > > > > > > > > > +	 * the broken vq. Hence, it is safe to cancel all requests which are
> > > > > > > > > > > > > > +	 * started.
> > > > > > > > > > > > > > +	 */
> > > > > > > > > > > > > > +	blk_mq_tagset_busy_iter(&vblk->tag_set,
> > > > > > > > > > > > > > +				virtblk_complete_request_with_ioerr, vblk);
> > > > > > > > > > > > > > +	blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	/*
> > > > > > > > > > > > > > +	 * All pending requests are cleaned up. Time to resume, so that disk
> > > > > > > > > > > > > > +	 * deletion can be smooth. Start the HW queues so that when the queue
> > > > > > > > > > > > > > +	 * is unquiesced, requests can again enter the driver.
> > > > > > > > > > > > > > +	 */
> > > > > > > > > > > > > > +	blk_mq_start_stopped_hw_queues(q, true);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	/*
> > > > > > > > > > > > > > +	 * Unquiescing will trigger dispatching of any pending requests that
> > > > > > > > > > > > > > +	 * have crossed bio_queue_enter() to the driver.
> > > > > > > > > > > > > > +	 */
> > > > > > > > > > > > > > +	blk_mq_unquiesce_queue(q);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	/*
> > > > > > > > > > > > > > +	 * Wait for all pending dispatches to terminate, which may have been
> > > > > > > > > > > > > > +	 * initiated after unquiescing.
> > > > > > > > > > > > > > +	 */
> > > > > > > > > > > > > > +	blk_mq_freeze_queue_wait(q);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	/*
> > > > > > > > > > > > > > +	 * Mark the disk dead so that once we unfreeze the queue, requests
> > > > > > > > > > > > > > +	 * waiting at the door of bio_queue_enter() can be aborted right away.
> > > > > > > > > > > > > > +	 */
> > > > > > > > > > > > > > +	blk_mark_disk_dead(vblk->disk);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +	/* Unfreeze the queue so that any waiting requests will be aborted. */
> > > > > > > > > > > > > > +	blk_mq_unfreeze_queue_nomemrestore(q);
> > > > > > > > > > > > > > +}
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > >  static void virtblk_remove(struct virtio_device *vdev)
> > > > > > > > > > > > > >  {
> > > > > > > > > > > > > >  	struct virtio_blk *vblk = vdev->priv;
> > > > > > > > > > > > > > @@ -1561,6 +1653,9 @@ static void virtblk_remove(struct virtio_device *vdev)
> > > > > > > > > > > > > >  	/* Make sure no work handler is accessing the device. */
> > > > > > > > > > > > > >  	flush_work(&vblk->config_work);
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > +	if (virtqueue_is_broken(vblk->vqs[0].vq))
> > > > > > > > > > > > > > +		virtblk_cleanup_broken_device(vblk);
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > >  	del_gendisk(vblk->disk);
> > > > > > > > > > > > > >  	blk_mq_free_tag_set(&vblk->tag_set);
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > --
> > > > > > > > > > > > > > 2.34.1
> > > > > > > > > > > > > >
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-26 6:29 ` Parav Pandit
@ 2025-06-26 6:33 ` Michael S. Tsirkin
2025-06-26 9:19 ` Parav Pandit
0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2025-06-26 6:33 UTC (permalink / raw)
To: Parav Pandit
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
On Thu, Jun 26, 2025 at 06:29:09AM +0000, Parav Pandit wrote:
> > > > yes, however this is not at all different from hotunplug right after reset.
> > > >
> > > For hotunplug after reset, we likely need a timeout handler,
> > > because the block driver running inside the remove() callback, waiting for
> > > the IO, may not get notified by the driver core to synchronize the ongoing
> > > remove().
> >
> >
> > Notified of what?
> Notification that surprise-removal occurred.
>
> > So is the scenario that graceful remove starts, and
> > meanwhile a surprise removal happens?
> >
> Right.
Where is it stuck then? Can you explain?
^ permalink raw reply [flat|nested] 24+ messages in thread
* RE: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-26 6:33 ` Michael S. Tsirkin
@ 2025-06-26 9:19 ` Parav Pandit
2025-06-27 12:21 ` Michael S. Tsirkin
0 siblings, 1 reply; 24+ messages in thread
From: Parav Pandit @ 2025-06-26 9:19 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
> From: Michael S. Tsirkin <mst@redhat.com>
> Sent: 26 June 2025 12:04 PM
> To: Parav Pandit <parav@nvidia.com>
> Cc: Stefan Hajnoczi <stefanha@redhat.com>; axboe@kernel.dk;
> virtualization@lists.linux.dev; linux-block@vger.kernel.org;
> stable@vger.kernel.org; NBU-Contact-Li Rongqing (EXTERNAL)
> <lirongqing@baidu.com>; Chaitanya Kulkarni <chaitanyak@nvidia.com>;
> xuanzhuo@linux.alibaba.com; pbonzini@redhat.com;
> jasowang@redhat.com; alok.a.tiwari@oracle.com; Max Gurtovoy
> <mgurtovoy@nvidia.com>; Israel Rukshin <israelr@nvidia.com>
> Subject: Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise
> removal
>
> On Thu, Jun 26, 2025 at 06:29:09AM +0000, Parav Pandit wrote:
> > > > > yes, however this is not at all different from hotunplug right after reset.
> > > > >
> > > > For hotunplug after reset, we likely need a timeout handler,
> > > > because the block driver running inside the remove() callback, waiting
> > > > for the IO, may not get notified by the driver core to synchronize the
> > > > ongoing remove().
> > >
> > >
> > > Notified of what?
> > Notification that surprise-removal occurred.
> >
> > > So is the scenario that graceful remove starts, and meanwhile a
> > > surprise removal happens?
> > >
> > Right.
>
>
> where is it stuck then? can you explain?
I am not sure I understood the question.
Let me try:
The following scenario will hang even with the current fix:
Say,
1. A graceful removal is ongoing in the remove() callback, where disk deletion via del_gendisk() is in progress, waiting for the requests to complete.
2. Now, with a few requests yet to complete, a surprise removal starts.
At this point, the virtio block driver will not get notified by the driver core layer, because the core is likely serializing the remove() initiated by user/driver unload with the PCI hotplug driver-initiated device removal.
So the vblk driver doesn't know that the device is removed, and the block layer keeps waiting for request completions that will never arrive.
So del_gendisk() gets stuck.
This needs some kind of timeout handling to make removal more robust.
Did that answer the question, or did I misunderstand it?
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-26 9:19 ` Parav Pandit
@ 2025-06-27 12:21 ` Michael S. Tsirkin
2025-06-27 14:00 ` Keith Busch
0 siblings, 1 reply; 24+ messages in thread
From: Michael S. Tsirkin @ 2025-06-27 12:21 UTC (permalink / raw)
To: Parav Pandit
Cc: Stefan Hajnoczi, axboe@kernel.dk, virtualization@lists.linux.dev,
linux-block@vger.kernel.org, stable@vger.kernel.org,
NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni,
xuanzhuo@linux.alibaba.com, pbonzini@redhat.com,
jasowang@redhat.com, alok.a.tiwari@oracle.com, Max Gurtovoy,
Israel Rukshin
On Thu, Jun 26, 2025 at 09:19:49AM +0000, Parav Pandit wrote:
>
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: 26 June 2025 12:04 PM
> > To: Parav Pandit <parav@nvidia.com>
> > Cc: Stefan Hajnoczi <stefanha@redhat.com>; axboe@kernel.dk;
> > virtualization@lists.linux.dev; linux-block@vger.kernel.org;
> > stable@vger.kernel.org; NBU-Contact-Li Rongqing (EXTERNAL)
> > <lirongqing@baidu.com>; Chaitanya Kulkarni <chaitanyak@nvidia.com>;
> > xuanzhuo@linux.alibaba.com; pbonzini@redhat.com;
> > jasowang@redhat.com; alok.a.tiwari@oracle.com; Max Gurtovoy
> > <mgurtovoy@nvidia.com>; Israel Rukshin <israelr@nvidia.com>
> > Subject: Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise
> > removal
> >
> > On Thu, Jun 26, 2025 at 06:29:09AM +0000, Parav Pandit wrote:
> > > > > > yes, however this is not at all different from hotunplug right after reset.
> > > > > >
> > > > > For hotunplug after reset, we likely need a timeout handler,
> > > > > because the block driver running inside the remove() callback, waiting
> > > > > for the IO, may not get notified by the driver core to synchronize the
> > > > > ongoing remove().
> > > >
> > > >
> > > > Notified of what?
> > > Notification that surprise-removal occurred.
> > >
> > > > So is the scenario that graceful remove starts, and meanwhile a
> > > > surprise removal happens?
> > > >
> > > Right.
> >
> >
> > where is it stuck then? can you explain?
>
> I am not sure I understood the question.
>
> Let me try:
> The following scenario will hang even with the current fix:
>
> Say,
> 1. A graceful removal is ongoing in the remove() callback, where disk deletion via del_gendisk() is in progress, waiting for the requests to complete.
>
> 2. Now, with a few requests yet to complete, a surprise removal starts.
>
> At this point, the virtio block driver will not get notified by the driver core layer, because the core is likely serializing the remove() initiated by user/driver unload with the PCI hotplug driver-initiated device removal.
> So the vblk driver doesn't know that the device is removed, and the block layer keeps waiting for request completions that will never arrive.
> So del_gendisk() gets stuck.
>
> This needs some kind of timeout handling to make removal more robust.
>
> Did that answer the question, or did I misunderstand it?
You did, thanks! How do other drivers handle this? The issue seems generic.
--
MST
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal
2025-06-27 12:21 ` Michael S. Tsirkin
@ 2025-06-27 14:00 ` Keith Busch
0 siblings, 0 replies; 24+ messages in thread
From: Keith Busch @ 2025-06-27 14:00 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Parav Pandit, Stefan Hajnoczi, axboe@kernel.dk,
virtualization@lists.linux.dev, linux-block@vger.kernel.org,
stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL),
Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com,
pbonzini@redhat.com, jasowang@redhat.com,
alok.a.tiwari@oracle.com, Max Gurtovoy, Israel Rukshin
On Fri, Jun 27, 2025 at 08:21:16AM -0400, Michael S. Tsirkin wrote:
>
> You did, thanks! How do other drivers handle this? The issue seems generic.
They implement blk_mq_ops' ".timeout" callback, which appears to be
missing in virtio_mq_ops.
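For reference, a rough sketch of what such a callback might look like, building on the helpers introduced by the v5 patch. This is hypothetical and not compile-tested against any tree; `virtblk_timeout` is an invented name, and the exact return values and ops wiring would need to follow the current blk-mq API:

```c
/* Hypothetical sketch: fail timed-out requests when the VQ is broken. */
static enum blk_eh_timer_return virtblk_timeout(struct request *rq)
{
	struct virtio_blk *vblk = rq->mq_hctx->queue->queuedata;
	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);

	if (virtqueue_is_broken(vblk->vqs[rq->mq_hctx->queue_num].vq)) {
		/* The device is gone; complete with an IO error. */
		vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
		blk_mq_complete_request(rq);
		return BLK_EH_DONE;
	}

	/* The device may just be slow; keep waiting. */
	return BLK_EH_RESET_TIMER;
}

static const struct blk_mq_ops virtio_mq_ops = {
	/* ... existing callbacks ... */
	.timeout	= virtblk_timeout,
};
```

This would cover the race window described earlier, where the surprise removal happens after remove() has already started waiting for completions.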
^ permalink raw reply [flat|nested] 24+ messages in thread
end of thread, other threads:[~2025-06-27 14:00 UTC | newest]
Thread overview: 24+ messages
2025-06-02 2:44 [PATCH v5] virtio_blk: Fix disk deletion hang on device surprise removal Parav Pandit
2025-06-02 11:46 ` ALOK TIWARI
2025-06-18 9:29 ` Parav Pandit
2025-06-24 18:56 ` Stefan Hajnoczi
2025-06-24 19:01 ` Parav Pandit
2025-06-24 19:06 ` Michael S. Tsirkin
2025-06-24 19:11 ` Parav Pandit
2025-06-24 19:54 ` Michael S. Tsirkin
2025-06-25 2:55 ` Parav Pandit
2025-06-25 11:04 ` Michael S. Tsirkin
2025-06-25 11:08 ` Parav Pandit
2025-06-25 11:44 ` Michael S. Tsirkin
2025-06-25 19:08 ` Parav Pandit
2025-06-25 19:22 ` Michael S. Tsirkin
2025-06-26 3:26 ` Parav Pandit
2025-06-26 6:04 ` Michael S. Tsirkin
2025-06-26 6:29 ` Parav Pandit
2025-06-26 6:33 ` Michael S. Tsirkin
2025-06-26 9:19 ` Parav Pandit
2025-06-27 12:21 ` Michael S. Tsirkin
2025-06-27 14:00 ` Keith Busch
2025-06-24 19:06 ` Michael S. Tsirkin
2025-06-25 12:39 ` Michael S. Tsirkin
2025-06-25 19:01 ` Parav Pandit