* Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal
  [not found] <20250521062744.1361774-1-parav@nvidia.com>
@ 2025-05-21 8:17 ` Michael S. Tsirkin
  2025-05-21 9:14   ` Parav Pandit
  2025-05-21 14:56 ` Stefan Hajnoczi
  1 sibling, 1 reply; 17+ messages in thread
From: Michael S. Tsirkin @ 2025-05-21 8:17 UTC (permalink / raw)
To: Parav Pandit
Cc: stefanha, axboe, virtualization, linux-block, stable, lirongqing,
	kch, xuanzhuo, pbonzini, jasowang, Max Gurtovoy, Israel Rukshin

On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote:
> When the PCI device is surprise removed, requests may not be completed
> by the device because the VQ is marked as broken. Due to this, the disk
> deletion hangs.
> 
> Fix it by aborting the requests when the VQ is broken.
> 
> With this fix, fio now completes swiftly.
> An alternative of an I/O timeout has been considered; however, when the
> driver knows about an unresponsive block device, swiftly clearing the
> requests enables users and upper layers to react quickly.
> 
> Verified with multiple device unplug iterations with pending requests in
> the virtio used ring and some pending with the device.
> 
> Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> Cc: stable@vger.kernel.org
> Reported-by: lirongqing@baidu.com
> Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/
> Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> Reviewed-by: Israel Rukshin <israelr@nvidia.com>
> Signed-off-by: Parav Pandit <parav@nvidia.com>
> ---
> changelog:
> v0->v1:
> - Fixed comments from Stefan to rename a cleanup function
> - Improved logic for handling any outstanding requests
>   in the bio layer
> - Improved the cancel callback to synchronize with an ongoing done()

thanks for the patch!
questions:

> ---
>  drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++
>  1 file changed, 95 insertions(+)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 7cffea01d868..5212afdbd3c7 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
>  	blk_status_t status;
>  	int err;
>  
> +	/* Immediately fail all incoming requests if the vq is broken.
> +	 * Once the queue is unquiesced, the upper block layer flushes any
> +	 * pending queued requests; fail them right away.
> +	 */
> +	if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq)))
> +		return BLK_STS_IOERR;
> +
>  	status = virtblk_prep_rq(hctx, vblk, req, vbr);
>  	if (unlikely(status))
>  		return status;

just below this:

	spin_lock_irqsave(&vblk->vqs[qid].lock, flags);
	err = virtblk_add_req(vblk->vqs[qid].vq, vbr);
	if (err) {

and virtblk_add_req calls virtqueue_add_sgs, so it will fail on a broken vq.

Why do we need to check it one extra time here?

> @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist)
>  	while ((req = rq_list_pop(rqlist))) {
>  		struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
>  
> +		if (unlikely(virtqueue_is_broken(this_vq->vq))) {
> +			rq_list_add_tail(&requeue_list, req);
> +			continue;
> +		}
> +
>  		if (vq && vq != this_vq)
>  			virtblk_add_req_batch(vq, &submit_list);
>  		vq = this_vq;

similarly

> @@ -1554,6 +1566,87 @@ static int virtblk_probe(struct virtio_device *vdev)
>  	return err;
>  }
>  
> +static bool virtblk_request_cancel(struct request *rq, void *data)
> +{
> +	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> +	struct virtio_blk *vblk = data;
> +	struct virtio_blk_vq *vq;
> +	unsigned long flags;
> +
> +	vq = &vblk->vqs[rq->mq_hctx->queue_num];
> +
> +	spin_lock_irqsave(&vq->lock, flags);
> +
> +	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> +	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
> +		blk_mq_complete_request(rq);
> +
> +	spin_unlock_irqrestore(&vq->lock, flags);
> +	return true;
> +}
> +
> +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk)
> +{
> +	struct request_queue *q = vblk->disk->queue;
> +
> +	if (!virtqueue_is_broken(vblk->vqs[0].vq))
> +		return;
> +
> +	/* Start freezing the queue, so that new requests keep waiting at the
> +	 * door of bio_queue_enter(). We cannot fully freeze the queue because
> +	 * a frozen queue is an empty queue and there are pending requests, so
> +	 * only start freezing it.
> +	 */
> +	blk_freeze_queue_start(q);
> +
> +	/* When quiescing completes, all ongoing dispatches have completed
> +	 * and no new dispatch will happen towards the driver.
> +	 * This ensures that when cancellation is attempted later, requests
> +	 * are not being processed by the queue_rq() or queue_rqs() handlers.
> +	 */
> +	blk_mq_quiesce_queue(q);
> +
> +	/*
> +	 * Synchronize with any ongoing VQ callbacks, effectively quiescing
> +	 * the device and preventing it from completing further requests
> +	 * to the block layer. Any outstanding, incomplete requests will be
> +	 * completed by virtblk_request_cancel().
> +	 */
> +	virtio_synchronize_cbs(vblk->vdev);
> +
> +	/* At this point, no new requests can enter queue_rq() and the
> +	 * completion routine will not complete any new requests either for
> +	 * the broken vq. Hence, it is safe to cancel all requests which have
> +	 * been started.
> +	 */
> +	blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk);
> +	blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> +
> +	/* All pending requests are cleaned up. Time to resume so that disk
> +	 * deletion can be smooth. Start the HW queues so that when the queue
> +	 * is unquiesced, requests can again enter the driver.
> +	 */
> +	blk_mq_start_stopped_hw_queues(q, true);
> +
> +	/* Unquiescing will trigger dispatching of any pending requests that
> +	 * have crossed bio_queue_enter() to the driver.
> +	 */
> +	blk_mq_unquiesce_queue(q);
> +
> +	/* Wait for all pending dispatches, which may have been initiated
> +	 * after unquiescing, to terminate.
> +	 */
> +	blk_mq_freeze_queue_wait(q);
> +
> +	/* Mark the disk dead so that once the queue is unfrozen, the requests
> +	 * waiting at the door of bio_queue_enter() can be aborted right away.
> +	 */
> +	blk_mark_disk_dead(vblk->disk);
> +
> +	/* Unfreeze the queue so that any waiting requests will be aborted. */
> +	blk_mq_unfreeze_queue_nomemrestore(q);
> +}
> +
>  static void virtblk_remove(struct virtio_device *vdev)
>  {
>  	struct virtio_blk *vblk = vdev->priv;
> @@ -1561,6 +1654,8 @@ static void virtblk_remove(struct virtio_device *vdev)
>  	/* Make sure no work handler is accessing the device. */
>  	flush_work(&vblk->config_work);
>  
> +	virtblk_broken_device_cleanup(vblk);
> +
>  	del_gendisk(vblk->disk);
>  	blk_mq_free_tag_set(&vblk->tag_set);
>  
> -- 
> 2.34.1

^ permalink raw reply	[flat|nested] 17+ messages in thread
* RE: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-21 8:17 ` [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal Michael S. Tsirkin @ 2025-05-21 9:14 ` Parav Pandit 2025-05-21 9:19 ` Michael S. Tsirkin 0 siblings, 1 reply; 17+ messages in thread From: Parav Pandit @ 2025-05-21 9:14 UTC (permalink / raw) To: Michael S. Tsirkin Cc: stefanha@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.or, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin > From: Michael S. Tsirkin <mst@redhat.com> > Sent: Wednesday, May 21, 2025 1:48 PM > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > When the PCI device is surprise removed, requests may not complete the > > device as the VQ is marked as broken. Due to this, the disk deletion > > hangs. > > > > Fix it by aborting the requests when the VQ is broken. > > > > With this fix now fio completes swiftly. > > An alternative of IO timeout has been considered, however when the > > driver knows about unresponsive block device, swiftly clearing them > > enables users and upper layers to react quickly. > > > > Verified with multiple device unplug iterations with pending requests > > in virtio used ring and some pending with the device. 
> > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio > > pci device") > > Cc: stable@vger.kernel.org > > Reported-by: lirongqing@baidu.com > > Closes: > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b474 > > 1@baidu.com/ > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > --- > > changelog: > > v0->v1: > > - Fixed comments from Stefan to rename a cleanup function > > - Improved logic for handling any outstanding requests > > in bio layer > > - improved cancel callback to sync with ongoing done() > > thanks for the patch! > questions: > > > > --- > > drivers/block/virtio_blk.c | 95 > > ++++++++++++++++++++++++++++++++++++++ > > 1 file changed, 95 insertions(+) > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c > > index 7cffea01d868..5212afdbd3c7 100644 > > --- a/drivers/block/virtio_blk.c > > +++ b/drivers/block/virtio_blk.c > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct > blk_mq_hw_ctx *hctx, > > blk_status_t status; > > int err; > > > > + /* Immediately fail all incoming requests if the vq is broken. > > + * Once the queue is unquiesced, upper block layer flushes any > pending > > + * queued requests; fail them right away. > > + */ > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > + return BLK_STS_IOERR; > > + > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > if (unlikely(status)) > > return status; > > just below this: > spin_lock_irqsave(&vblk->vqs[qid].lock, flags); > err = virtblk_add_req(vblk->vqs[qid].vq, vbr); > if (err) { > > > and virtblk_add_req calls virtqueue_add_sgs, so it will fail on a broken vq. > > Why do we need to check it one extra time here? > It may work, but for some reason if the hw queue is stopped in this flow, it can hang the IOs flushing. 
I considered it risky to rely on the -ENOSPC error code returned by a layer outside the virtio-blk driver.
In other words, if the lower layer changed for some reason, we could end up stopping the hw queue when the vq is broken, and requests would hang.

Compared to that, a one-time check on entry seems more robust.

> > 
> > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist)
> >  	while ((req = rq_list_pop(rqlist))) {
> >  		struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
> > 
> > +		if (unlikely(virtqueue_is_broken(this_vq->vq))) {
> > +			rq_list_add_tail(&requeue_list, req);
> > +			continue;
> > +		}
> > +
> >  		if (vq && vq != this_vq)
> >  			virtblk_add_req_batch(vq, &submit_list);
> >  		vq = this_vq;
> 
> similarly
> 
The error code from virtblk_add_req() does not surface up here. We would end up checking for a special error code here as well, translating a broken VQ -> EIO to break the loop in virtblk_add_req_batch().

Weighing that against a data path based on specific error codes, which may require auditing lower layers now and in the future, an explicit check for a broken vq in this layer could be better.

[..]
* Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-21 9:14 ` Parav Pandit @ 2025-05-21 9:19 ` Michael S. Tsirkin 2025-05-21 9:32 ` Parav Pandit 0 siblings, 1 reply; 17+ messages in thread From: Michael S. Tsirkin @ 2025-05-21 9:19 UTC (permalink / raw) To: Parav Pandit Cc: stefanha@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.or, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin On Wed, May 21, 2025 at 09:14:31AM +0000, Parav Pandit wrote: > > From: Michael S. Tsirkin <mst@redhat.com> > > Sent: Wednesday, May 21, 2025 1:48 PM > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > When the PCI device is surprise removed, requests may not complete the > > > device as the VQ is marked as broken. Due to this, the disk deletion > > > hangs. > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > With this fix now fio completes swiftly. > > > An alternative of IO timeout has been considered, however when the > > > driver knows about unresponsive block device, swiftly clearing them > > > enables users and upper layers to react quickly. > > > > > > Verified with multiple device unplug iterations with pending requests > > > in virtio used ring and some pending with the device. 
> > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio > > > pci device") > > > Cc: stable@vger.kernel.org > > > Reported-by: lirongqing@baidu.com > > > Closes: > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b474 > > > 1@baidu.com/ > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > --- > > > changelog: > > > v0->v1: > > > - Fixed comments from Stefan to rename a cleanup function > > > - Improved logic for handling any outstanding requests > > > in bio layer > > > - improved cancel callback to sync with ongoing done() > > > > thanks for the patch! > > questions: > > > > > > > --- > > > drivers/block/virtio_blk.c | 95 > > > ++++++++++++++++++++++++++++++++++++++ > > > 1 file changed, 95 insertions(+) > > > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c > > > index 7cffea01d868..5212afdbd3c7 100644 > > > --- a/drivers/block/virtio_blk.c > > > +++ b/drivers/block/virtio_blk.c > > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct > > blk_mq_hw_ctx *hctx, > > > blk_status_t status; > > > int err; > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > + * Once the queue is unquiesced, upper block layer flushes any > > pending > > > + * queued requests; fail them right away. > > > + */ > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > + return BLK_STS_IOERR; > > > + > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > if (unlikely(status)) > > > return status; > > > > just below this: > > spin_lock_irqsave(&vblk->vqs[qid].lock, flags); > > err = virtblk_add_req(vblk->vqs[qid].vq, vbr); > > if (err) { > > > > > > and virtblk_add_req calls virtqueue_add_sgs, so it will fail on a broken vq. > > > > Why do we need to check it one extra time here? 
> > > It may work, but for some reason if the hw queue is stopped in this flow, it can hang the IOs flushing. > I considered it risky to rely on the error code ENOSPC returned by non virtio-blk driver. > In other words, if lower layer changed for some reason, we may end up in stopping the hw queue when broken, and requests would hang. > > Compared to that one-time entry check seems more robust. I don't get it. Checking twice in a row is more robust? What am I missing? Can you describe the scenario in more detail? > > > > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist) > > > while ((req = rq_list_pop(rqlist))) { > > > struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req- > > >mq_hctx); > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > + rq_list_add_tail(&requeue_list, req); > > > + continue; > > > + } > > > + > > > if (vq && vq != this_vq) > > > virtblk_add_req_batch(vq, &submit_list); > > > vq = this_vq; > > > > similarly > > > The error code is not surfacing up here from virtblk_add_req(). but wait a sec: static void virtblk_add_req_batch(struct virtio_blk_vq *vq, struct rq_list *rqlist) { struct request *req; unsigned long flags; bool kick; spin_lock_irqsave(&vq->lock, flags); while ((req = rq_list_pop(rqlist))) { struct virtblk_req *vbr = blk_mq_rq_to_pdu(req); int err; err = virtblk_add_req(vq->vq, vbr); if (err) { virtblk_unmap_data(req, vbr); virtblk_cleanup_cmd(req); blk_mq_requeue_request(req, true); } } kick = virtqueue_kick_prepare(vq->vq); spin_unlock_irqrestore(&vq->lock, flags); if (kick) virtqueue_notify(vq->vq); } it actually handles the error internally? > It would end up adding checking for special error code here as well to abort by translating broken VQ -> EIO to break the loop in virtblk_add_req_batch(). > > Weighing on specific error code-based data path that may require audit from lower layers now and future, an explicit check of broken in this layer could be better. > > [..] 
Checking add was successful is preferred because it has to be done *anyway* - device can get broken after you check before add. So I would like to understand why are we also checking explicitly and I do not get it so far. ^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-21 9:19 ` Michael S. Tsirkin @ 2025-05-21 9:32 ` Parav Pandit 2025-05-21 10:16 ` Michael S. Tsirkin 0 siblings, 1 reply; 17+ messages in thread From: Parav Pandit @ 2025-05-21 9:32 UTC (permalink / raw) To: Michael S. Tsirkin Cc: stefanha@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin > From: Michael S. Tsirkin <mst@redhat.com> > Sent: Wednesday, May 21, 2025 2:49 PM > To: Parav Pandit <parav@nvidia.com> > Cc: stefanha@redhat.com; axboe@kernel.dk; virtualization@lists.linux.dev; > linux-block@vger.kernel.or; stable@vger.kernel.org; NBU-Contact-Li Rongqing > (EXTERNAL) <lirongqing@baidu.com>; Chaitanya Kulkarni > <chaitanyak@nvidia.com>; xuanzhuo@linux.alibaba.com; > pbonzini@redhat.com; jasowang@redhat.com; Max Gurtovoy > <mgurtovoy@nvidia.com>; Israel Rukshin <israelr@nvidia.com> > Subject: Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise > removal > > On Wed, May 21, 2025 at 09:14:31AM +0000, Parav Pandit wrote: > > > From: Michael S. Tsirkin <mst@redhat.com> > > > Sent: Wednesday, May 21, 2025 1:48 PM > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > > When the PCI device is surprise removed, requests may not complete > > > > the device as the VQ is marked as broken. Due to this, the disk > > > > deletion hangs. > > > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > > > With this fix now fio completes swiftly. > > > > An alternative of IO timeout has been considered, however when the > > > > driver knows about unresponsive block device, swiftly clearing > > > > them enables users and upper layers to react quickly. 
> > > > > > > > Verified with multiple device unplug iterations with pending > > > > requests in virtio used ring and some pending with the device. > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of > > > > virtio pci device") > > > > Cc: stable@vger.kernel.org > > > > Reported-by: lirongqing@baidu.com > > > > Closes: > > > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4 > > > 74 > > > > 1@baidu.com/ > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > > --- > > > > changelog: > > > > v0->v1: > > > > - Fixed comments from Stefan to rename a cleanup function > > > > - Improved logic for handling any outstanding requests > > > > in bio layer > > > > - improved cancel callback to sync with ongoing done() > > > > > > thanks for the patch! > > > questions: > > > > > > > > > > --- > > > > drivers/block/virtio_blk.c | 95 > > > > ++++++++++++++++++++++++++++++++++++++ > > > > 1 file changed, 95 insertions(+) > > > > > > > > diff --git a/drivers/block/virtio_blk.c > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..5212afdbd3c7 > > > > 100644 > > > > --- a/drivers/block/virtio_blk.c > > > > +++ b/drivers/block/virtio_blk.c > > > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct > > > blk_mq_hw_ctx *hctx, > > > > blk_status_t status; > > > > int err; > > > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > > + * Once the queue is unquiesced, upper block layer flushes any > > > pending > > > > + * queued requests; fail them right away. 
> > > > + */ > > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > > + return BLK_STS_IOERR; > > > > + > > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > > if (unlikely(status)) > > > > return status; > > > > > > just below this: > > > spin_lock_irqsave(&vblk->vqs[qid].lock, flags); > > > err = virtblk_add_req(vblk->vqs[qid].vq, vbr); > > > if (err) { > > > > > > > > > and virtblk_add_req calls virtqueue_add_sgs, so it will fail on a broken vq. > > > > > > Why do we need to check it one extra time here? > > > > > It may work, but for some reason if the hw queue is stopped in this flow, it > can hang the IOs flushing. > > > I considered it risky to rely on the error code ENOSPC returned by non virtio- > blk driver. > > In other words, if lower layer changed for some reason, we may end up in > stopping the hw queue when broken, and requests would hang. > > > > Compared to that one-time entry check seems more robust. > > I don't get it. > Checking twice in a row is more robust? No. I am not confident on the relying on the error code -ENOSPC from layers outside of virtio-blk driver. If for a broken VQ, ENOSPC arrives, then hw queue is stopped and requests could be stuck. > What am I missing? > Can you describe the scenario in more detail? > > > > > > > > > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list > *rqlist) > > > > while ((req = rq_list_pop(rqlist))) { > > > > struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req- > > > >mq_hctx); > > > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > > + rq_list_add_tail(&requeue_list, req); > > > > + continue; > > > > + } > > > > + > > > > if (vq && vq != this_vq) > > > > virtblk_add_req_batch(vq, &submit_list); > > > > vq = this_vq; > > > > > > similarly > > > > > The error code is not surfacing up here from virtblk_add_req(). 
> > > but wait a sec: > > static void virtblk_add_req_batch(struct virtio_blk_vq *vq, > struct rq_list *rqlist) > { > struct request *req; > unsigned long flags; > bool kick; > > spin_lock_irqsave(&vq->lock, flags); > > while ((req = rq_list_pop(rqlist))) { > struct virtblk_req *vbr = blk_mq_rq_to_pdu(req); > int err; > > err = virtblk_add_req(vq->vq, vbr); > if (err) { > virtblk_unmap_data(req, vbr); > virtblk_cleanup_cmd(req); > blk_mq_requeue_request(req, true); > } > } > > kick = virtqueue_kick_prepare(vq->vq); > spin_unlock_irqrestore(&vq->lock, flags); > > if (kick) > virtqueue_notify(vq->vq); } > > > it actually handles the error internally? > For all the errors it requeues the request here. > > > > > It would end up adding checking for special error code here as well to abort > by translating broken VQ -> EIO to break the loop in virtblk_add_req_batch(). > > > > Weighing on specific error code-based data path that may require audit from > lower layers now and future, an explicit check of broken in this layer could be > better. > > > > [..] > > > Checking add was successful is preferred because it has to be done > *anyway* - device can get broken after you check before add. > > So I would like to understand why are we also checking explicitly and I do not > get it so far. checking explicitly to not depend on specific error code-based logic. ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-21 9:32 ` Parav Pandit @ 2025-05-21 10:16 ` Michael S. Tsirkin 2025-05-21 10:34 ` Parav Pandit 0 siblings, 1 reply; 17+ messages in thread From: Michael S. Tsirkin @ 2025-05-21 10:16 UTC (permalink / raw) To: Parav Pandit Cc: stefanha@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin On Wed, May 21, 2025 at 09:32:30AM +0000, Parav Pandit wrote: > > From: Michael S. Tsirkin <mst@redhat.com> > > Sent: Wednesday, May 21, 2025 2:49 PM > > To: Parav Pandit <parav@nvidia.com> > > Cc: stefanha@redhat.com; axboe@kernel.dk; virtualization@lists.linux.dev; > > linux-block@vger.kernel.or; stable@vger.kernel.org; NBU-Contact-Li Rongqing > > (EXTERNAL) <lirongqing@baidu.com>; Chaitanya Kulkarni > > <chaitanyak@nvidia.com>; xuanzhuo@linux.alibaba.com; > > pbonzini@redhat.com; jasowang@redhat.com; Max Gurtovoy > > <mgurtovoy@nvidia.com>; Israel Rukshin <israelr@nvidia.com> > > Subject: Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise > > removal > > > > On Wed, May 21, 2025 at 09:14:31AM +0000, Parav Pandit wrote: > > > > From: Michael S. Tsirkin <mst@redhat.com> > > > > Sent: Wednesday, May 21, 2025 1:48 PM > > > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > > > When the PCI device is surprise removed, requests may not complete > > > > > the device as the VQ is marked as broken. Due to this, the disk > > > > > deletion hangs. > > > > > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > > > > > With this fix now fio completes swiftly. 
> > > > > An alternative of IO timeout has been considered, however when the > > > > > driver knows about unresponsive block device, swiftly clearing > > > > > them enables users and upper layers to react quickly. > > > > > > > > > > Verified with multiple device unplug iterations with pending > > > > > requests in virtio used ring and some pending with the device. > > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of > > > > > virtio pci device") > > > > > Cc: stable@vger.kernel.org > > > > > Reported-by: lirongqing@baidu.com > > > > > Closes: > > > > > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4 > > > > 74 > > > > > 1@baidu.com/ > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > > > --- > > > > > changelog: > > > > > v0->v1: > > > > > - Fixed comments from Stefan to rename a cleanup function > > > > > - Improved logic for handling any outstanding requests > > > > > in bio layer > > > > > - improved cancel callback to sync with ongoing done() > > > > > > > > thanks for the patch! > > > > questions: > > > > > > > > > > > > > --- > > > > > drivers/block/virtio_blk.c | 95 > > > > > ++++++++++++++++++++++++++++++++++++++ > > > > > 1 file changed, 95 insertions(+) > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c > > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..5212afdbd3c7 > > > > > 100644 > > > > > --- a/drivers/block/virtio_blk.c > > > > > +++ b/drivers/block/virtio_blk.c > > > > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct > > > > blk_mq_hw_ctx *hctx, > > > > > blk_status_t status; > > > > > int err; > > > > > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > > > + * Once the queue is unquiesced, upper block layer flushes any > > > > pending > > > > > + * queued requests; fail them right away. 
> > > > > + */ > > > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > > > + return BLK_STS_IOERR; > > > > > + > > > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > > > if (unlikely(status)) > > > > > return status; > > > > > > > > just below this: > > > > spin_lock_irqsave(&vblk->vqs[qid].lock, flags); > > > > err = virtblk_add_req(vblk->vqs[qid].vq, vbr); > > > > if (err) { > > > > > > > > > > > > and virtblk_add_req calls virtqueue_add_sgs, so it will fail on a broken vq. > > > > > > > > Why do we need to check it one extra time here? > > > > > > > It may work, but for some reason if the hw queue is stopped in this flow, it > > can hang the IOs flushing. > > > > > I considered it risky to rely on the error code ENOSPC returned by non virtio- > > blk driver. > > > In other words, if lower layer changed for some reason, we may end up in > > stopping the hw queue when broken, and requests would hang. > > > > > > Compared to that one-time entry check seems more robust. > > > > I don't get it. > > Checking twice in a row is more robust? > No. I am not confident on the relying on the error code -ENOSPC from layers outside of virtio-blk driver. You can rely on virtio core to return an error on a broken vq. The error won't be -ENOSPC though, why would it? > If for a broken VQ, ENOSPC arrives, then hw queue is stopped and requests could be stuck. Can you describe the scenario in more detail pls? where does ENOSPC arrive from? when is the vq get broken ... > > What am I missing? > > Can you describe the scenario in more detail? 
> > > > > > > > > > > > > > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list > > *rqlist) > > > > > while ((req = rq_list_pop(rqlist))) { > > > > > struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req- > > > > >mq_hctx); > > > > > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > > > + rq_list_add_tail(&requeue_list, req); > > > > > + continue; > > > > > + } > > > > > + > > > > > if (vq && vq != this_vq) > > > > > virtblk_add_req_batch(vq, &submit_list); > > > > > vq = this_vq; > > > > > > > > similarly > > > > > > > The error code is not surfacing up here from virtblk_add_req(). > > > > > > but wait a sec: > > > > static void virtblk_add_req_batch(struct virtio_blk_vq *vq, > > struct rq_list *rqlist) > > { > > struct request *req; > > unsigned long flags; > > bool kick; > > > > spin_lock_irqsave(&vq->lock, flags); > > > > while ((req = rq_list_pop(rqlist))) { > > struct virtblk_req *vbr = blk_mq_rq_to_pdu(req); > > int err; > > > > err = virtblk_add_req(vq->vq, vbr); > > if (err) { > > virtblk_unmap_data(req, vbr); > > virtblk_cleanup_cmd(req); > > blk_mq_requeue_request(req, true); > > } > > } > > > > kick = virtqueue_kick_prepare(vq->vq); > > spin_unlock_irqrestore(&vq->lock, flags); > > > > if (kick) > > virtqueue_notify(vq->vq); } > > > > > > it actually handles the error internally? > > > For all the errors it requeues the request here. ok and they will not prevent removal will they? > > > > > > > > > It would end up adding checking for special error code here as well to abort > > by translating broken VQ -> EIO to break the loop in virtblk_add_req_batch(). > > > > > > Weighing on specific error code-based data path that may require audit from > > lower layers now and future, an explicit check of broken in this layer could be > > better. > > > > > > [..] > > > > > > Checking add was successful is preferred because it has to be done > > *anyway* - device can get broken after you check before add. 
> > 
> > So I would like to understand why are we also checking explicitly and I do
> > not get it so far.
> 
> Checking explicitly so as not to depend on logic based on a specific error code.

I do not understand. You must handle vq add errors anyway.

-- 
MST
* RE: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-21 10:16 ` Michael S. Tsirkin @ 2025-05-21 10:34 ` Parav Pandit 2025-05-21 10:43 ` Michael S. Tsirkin 0 siblings, 1 reply; 17+ messages in thread From: Parav Pandit @ 2025-05-21 10:34 UTC (permalink / raw) To: Michael S. Tsirkin Cc: stefanha@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin > From: Michael S. Tsirkin <mst@redhat.com> > Sent: Wednesday, May 21, 2025 3:46 PM > > On Wed, May 21, 2025 at 09:32:30AM +0000, Parav Pandit wrote: > > > From: Michael S. Tsirkin <mst@redhat.com> > > > Sent: Wednesday, May 21, 2025 2:49 PM > > > To: Parav Pandit <parav@nvidia.com> > > > Cc: stefanha@redhat.com; axboe@kernel.dk; > > > virtualization@lists.linux.dev; linux-block@vger.kernel.or; > > > stable@vger.kernel.org; NBU-Contact-Li Rongqing > > > (EXTERNAL) <lirongqing@baidu.com>; Chaitanya Kulkarni > > > <chaitanyak@nvidia.com>; xuanzhuo@linux.alibaba.com; > > > pbonzini@redhat.com; jasowang@redhat.com; Max Gurtovoy > > > <mgurtovoy@nvidia.com>; Israel Rukshin <israelr@nvidia.com> > > > Subject: Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device > > > surprise removal > > > > > > On Wed, May 21, 2025 at 09:14:31AM +0000, Parav Pandit wrote: > > > > > From: Michael S. Tsirkin <mst@redhat.com> > > > > > Sent: Wednesday, May 21, 2025 1:48 PM > > > > > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > > > > When the PCI device is surprise removed, requests may not > > > > > > complete the device as the VQ is marked as broken. Due to > > > > > > this, the disk deletion hangs. > > > > > > > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > > > > > > > With this fix now fio completes swiftly. 
> > > > > > An alternative of IO timeout has been considered, however when > > > > > > the driver knows about unresponsive block device, swiftly > > > > > > clearing them enables users and upper layers to react quickly. > > > > > > > > > > > > Verified with multiple device unplug iterations with pending > > > > > > requests in virtio used ring and some pending with the device. > > > > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of > > > > > > virtio pci device") > > > > > > Cc: stable@vger.kernel.org > > > > > > Reported-by: lirongqing@baidu.com > > > > > > Closes: > > > > > > > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73c > > > > > a9b4 > > > > > 74 > > > > > > 1@baidu.com/ > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > > > > --- > > > > > > changelog: > > > > > > v0->v1: > > > > > > - Fixed comments from Stefan to rename a cleanup function > > > > > > - Improved logic for handling any outstanding requests > > > > > > in bio layer > > > > > > - improved cancel callback to sync with ongoing done() > > > > > > > > > > thanks for the patch! > > > > > questions: > > > > > > > > > > > > > > > > --- > > > > > > drivers/block/virtio_blk.c | 95 > > > > > > ++++++++++++++++++++++++++++++++++++++ > > > > > > 1 file changed, 95 insertions(+) > > > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c > > > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..5212afdbd3c7 > > > > > > 100644 > > > > > > --- a/drivers/block/virtio_blk.c > > > > > > +++ b/drivers/block/virtio_blk.c > > > > > > @@ -435,6 +435,13 @@ static blk_status_t > > > > > > virtio_queue_rq(struct > > > > > blk_mq_hw_ctx *hctx, > > > > > > blk_status_t status; > > > > > > int err; > > > > > > > > > > > > + /* Immediately fail all incoming requests if the vq is broken. 
> > > > > > + * Once the queue is unquiesced, upper block layer flushes > > > > > > +any > > > > > pending > > > > > > + * queued requests; fail them right away. > > > > > > + */ > > > > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > > > > + return BLK_STS_IOERR; > > > > > > + > > > > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > > > > if (unlikely(status)) > > > > > > return status; > > > > > > > > > > just below this: > > > > > spin_lock_irqsave(&vblk->vqs[qid].lock, flags); > > > > > err = virtblk_add_req(vblk->vqs[qid].vq, vbr); > > > > > if (err) { > > > > > > > > > > > > > > > and virtblk_add_req calls virtqueue_add_sgs, so it will fail on a broken > vq. > > > > > > > > > > Why do we need to check it one extra time here? > > > > > > > > > It may work, but for some reason if the hw queue is stopped in > > > > this flow, it > > > can hang the IOs flushing. > > > > > > > I considered it risky to rely on the error code ENOSPC returned by > > > > non virtio- > > > blk driver. > > > > In other words, if lower layer changed for some reason, we may end > > > > up in > > > stopping the hw queue when broken, and requests would hang. > > > > > > > > Compared to that one-time entry check seems more robust. > > > > > > I don't get it. > > > Checking twice in a row is more robust? > > No. I am not confident on the relying on the error code -ENOSPC from layers > outside of virtio-blk driver. > > You can rely on virtio core to return an error on a broken vq. > The error won't be -ENOSPC though, why would it? > Presently that is not the API contract between virtio core and driver. When the VQ is broken the error code is EIO. This is from the code inspection. If you prefer to rely on the code inspection of lower layer to define the virtio-blk, I am fine and remove the two checks. I just find it fragile, but if you prefer this way, I am fine. > > If for a broken VQ, ENOSPC arrives, then hw queue is stopped and requests > could be stuck. 
>
> Can you describe the scenario in more detail pls?
> where does ENOSPC arrive from? when is the vq get broken ...
>
ENOSPC arrives when it fails to enqueue the request in its present form.
EIO arrives when the VQ is broken.

If, in the future, ENOSPC arrives for a broken VQ, the following flow can trigger a hang:

cpu_0:
    virtblk_broken_device_cleanup()
    ...
    blk_mq_unquiesce_queue();
    ...
    stage_1: blk_mq_freeze_queue_wait()

cpu_1:
    queue_rq()
    virtio_queue_rq()
    virtblk_add_req() -> -ENOSPC
    stop_hw_queue()

At this point, new requests in the block layer may get stuck and may not be enqueued to queue_rq().

> > > > What am I missing?
> > > Can you describe the scenario in more detail?
> > >
> > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist)
> > > > > >  	while ((req = rq_list_pop(rqlist))) {
> > > > > >  		struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
> > > > > >
> > > > > > +		if (unlikely(virtqueue_is_broken(this_vq->vq))) {
> > > > > > +			rq_list_add_tail(&requeue_list, req);
> > > > > > +			continue;
> > > > > > +		}
> > > > > > +
> > > > > >  		if (vq && vq != this_vq)
> > > > > >  			virtblk_add_req_batch(vq, &submit_list);
> > > > > >  		vq = this_vq;
> > > > >
> > > > > similarly
> > > > >
> > > > The error code is not surfacing up here from virtblk_add_req().
> > > >
> > > but wait a sec:
> > >
> > > static void virtblk_add_req_batch(struct virtio_blk_vq *vq,
> > > 				  struct rq_list *rqlist)
> > > {
> > > 	struct request *req;
> > > 	unsigned long flags;
> > > 	bool kick;
> > >
> > > 	spin_lock_irqsave(&vq->lock, flags);
> > >
> > > 	while ((req = rq_list_pop(rqlist))) {
> > > 		struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
> > > 		int err;
> > >
> > > 		err = virtblk_add_req(vq->vq, vbr);
> > > 		if (err) {
> > > 			virtblk_unmap_data(req, vbr);
> > > 			virtblk_cleanup_cmd(req);
> > > 			blk_mq_requeue_request(req, true);
> > > 		}
> > > 	}
> > >
> > > 	kick = virtqueue_kick_prepare(vq->vq);
> > > 	spin_unlock_irqrestore(&vq->lock, flags);
> > >
> > > 	if (kick)
> > > 		virtqueue_notify(vq->vq);
> > > }
> > >
> > > it actually handles the error internally?
> > >
> > For all the errors it requeues the request here.
>
> ok and they will not prevent removal will they?
>
It should not prevent removal.
One must be careful every time this is changed to make sure that hw queues are not stopped in the lower layer, but maybe this is ok.

> >
> > It would end up adding checking for special error code here as well to abort
> > by translating broken VQ -> EIO to break the loop in virtblk_add_req_batch().
> >
> > Weighing on specific error code-based data path that may require audit from
> > lower layers now and future, an explicit check of broken in this layer could be better.
> >
> > [..]
>
> Checking add was successful is preferred because it has to be done
> *anyway* - device can get broken after you check before add.
>
> So I would like to understand why are we also checking explicitly
> and I do not get it so far.
>
> > checking explicitly to not depend on specific error code-based logic.
>
> I do not understand. You must handle vq add errors anyway.

I believe removal of the two vq broken checks should also be fine.
I would probably add a comment in the code indicating that the virtio-blk driver assumes ENOSPC is not returned for a broken VQ.
* Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-21 10:34 ` Parav Pandit @ 2025-05-21 10:43 ` Michael S. Tsirkin 2025-05-21 12:40 ` Parav Pandit 0 siblings, 1 reply; 17+ messages in thread From: Michael S. Tsirkin @ 2025-05-21 10:43 UTC (permalink / raw) To: Parav Pandit Cc: stefanha@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin On Wed, May 21, 2025 at 10:34:46AM +0000, Parav Pandit wrote: > > > From: Michael S. Tsirkin <mst@redhat.com> > > Sent: Wednesday, May 21, 2025 3:46 PM > > > > On Wed, May 21, 2025 at 09:32:30AM +0000, Parav Pandit wrote: > > > > From: Michael S. Tsirkin <mst@redhat.com> > > > > Sent: Wednesday, May 21, 2025 2:49 PM > > > > To: Parav Pandit <parav@nvidia.com> > > > > Cc: stefanha@redhat.com; axboe@kernel.dk; > > > > virtualization@lists.linux.dev; linux-block@vger.kernel.or; > > > > stable@vger.kernel.org; NBU-Contact-Li Rongqing > > > > (EXTERNAL) <lirongqing@baidu.com>; Chaitanya Kulkarni > > > > <chaitanyak@nvidia.com>; xuanzhuo@linux.alibaba.com; > > > > pbonzini@redhat.com; jasowang@redhat.com; Max Gurtovoy > > > > <mgurtovoy@nvidia.com>; Israel Rukshin <israelr@nvidia.com> > > > > Subject: Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device > > > > surprise removal > > > > > > > > On Wed, May 21, 2025 at 09:14:31AM +0000, Parav Pandit wrote: > > > > > > From: Michael S. Tsirkin <mst@redhat.com> > > > > > > Sent: Wednesday, May 21, 2025 1:48 PM > > > > > > > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > > > > > When the PCI device is surprise removed, requests may not > > > > > > > complete the device as the VQ is marked as broken. Due to > > > > > > > this, the disk deletion hangs. 
> > > > > > > > > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > > > > > > > > > With this fix now fio completes swiftly. > > > > > > > An alternative of IO timeout has been considered, however when > > > > > > > the driver knows about unresponsive block device, swiftly > > > > > > > clearing them enables users and upper layers to react quickly. > > > > > > > > > > > > > > Verified with multiple device unplug iterations with pending > > > > > > > requests in virtio used ring and some pending with the device. > > > > > > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of > > > > > > > virtio pci device") > > > > > > > Cc: stable@vger.kernel.org > > > > > > > Reported-by: lirongqing@baidu.com > > > > > > > Closes: > > > > > > > > > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73c > > > > > > a9b4 > > > > > > 74 > > > > > > > 1@baidu.com/ > > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > > > > > --- > > > > > > > changelog: > > > > > > > v0->v1: > > > > > > > - Fixed comments from Stefan to rename a cleanup function > > > > > > > - Improved logic for handling any outstanding requests > > > > > > > in bio layer > > > > > > > - improved cancel callback to sync with ongoing done() > > > > > > > > > > > > thanks for the patch! 
> > > > > > questions: > > > > > > > > > > > > > > > > > > > --- > > > > > > > drivers/block/virtio_blk.c | 95 > > > > > > > ++++++++++++++++++++++++++++++++++++++ > > > > > > > 1 file changed, 95 insertions(+) > > > > > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c > > > > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..5212afdbd3c7 > > > > > > > 100644 > > > > > > > --- a/drivers/block/virtio_blk.c > > > > > > > +++ b/drivers/block/virtio_blk.c > > > > > > > @@ -435,6 +435,13 @@ static blk_status_t > > > > > > > virtio_queue_rq(struct > > > > > > blk_mq_hw_ctx *hctx, > > > > > > > blk_status_t status; > > > > > > > int err; > > > > > > > > > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > > > > > + * Once the queue is unquiesced, upper block layer flushes > > > > > > > +any > > > > > > pending > > > > > > > + * queued requests; fail them right away. > > > > > > > + */ > > > > > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > > > > > + return BLK_STS_IOERR; > > > > > > > + > > > > > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > > > > > if (unlikely(status)) > > > > > > > return status; > > > > > > > > > > > > just below this: > > > > > > spin_lock_irqsave(&vblk->vqs[qid].lock, flags); > > > > > > err = virtblk_add_req(vblk->vqs[qid].vq, vbr); > > > > > > if (err) { > > > > > > > > > > > > > > > > > > and virtblk_add_req calls virtqueue_add_sgs, so it will fail on a broken > > vq. > > > > > > > > > > > > Why do we need to check it one extra time here? > > > > > > > > > > > It may work, but for some reason if the hw queue is stopped in > > > > > this flow, it > > > > can hang the IOs flushing. > > > > > > > > > I considered it risky to rely on the error code ENOSPC returned by > > > > > non virtio- > > > > blk driver. > > > > > In other words, if lower layer changed for some reason, we may end > > > > > up in > > > > stopping the hw queue when broken, and requests would hang. 
> > > > > > > > > > Compared to that one-time entry check seems more robust. > > > > > > > > I don't get it. > > > > Checking twice in a row is more robust? > > > No. I am not confident on the relying on the error code -ENOSPC from layers > > outside of virtio-blk driver. > > > > You can rely on virtio core to return an error on a broken vq. > > The error won't be -ENOSPC though, why would it? > > > Presently that is not the API contract between virtio core and driver. > When the VQ is broken the error code is EIO. This is from the code inspection. yes > If you prefer to rely on the code inspection of lower layer to define the virtio-blk, I am fine and remove the two checks. > I just find it fragile, but if you prefer this way, I am fine. I think it's better, yes. > > > If for a broken VQ, ENOSPC arrives, then hw queue is stopped and requests > > could be stuck. > > > > Can you describe the scenario in more detail pls? > > where does ENOSPC arrive from? when is the vq get broken ... > > > ENOSPC arrives when it fails to enqueue the request in present form. > EIO arrives when VQ is broken. > > If in the future, ENOSPC arrives for broken VQ, following flow can trigger a hang. > > cpu_0: > virtblk_broken_device_cleanup() > ... > blk_mq_unquiesce_queue(); > ... stage_1: > blk_mq_freeze_queue_wait(). > > > Cpu_1: > Queue_rq() > virtio_queue_rq() > virtblk_add_req() > -ENOSPC > Stop_hw_queue() > At this point, new requests in block layer may get stuck and may not be enqueued to queue_rq(). > > > > > > > What am I missing? > > > > Can you describe the scenario in more detail? 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct > > > > > > > rq_list > > > > *rqlist) > > > > > > > while ((req = rq_list_pop(rqlist))) { > > > > > > > struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req- > > > > > > >mq_hctx); > > > > > > > > > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > > > > > + rq_list_add_tail(&requeue_list, req); > > > > > > > + continue; > > > > > > > + } > > > > > > > + > > > > > > > if (vq && vq != this_vq) > > > > > > > virtblk_add_req_batch(vq, &submit_list); > > > > > > > vq = this_vq; > > > > > > > > > > > > similarly > > > > > > > > > > > The error code is not surfacing up here from virtblk_add_req(). > > > > > > > > > > > > but wait a sec: > > > > > > > > static void virtblk_add_req_batch(struct virtio_blk_vq *vq, > > > > struct rq_list *rqlist) { > > > > struct request *req; > > > > unsigned long flags; > > > > bool kick; > > > > > > > > spin_lock_irqsave(&vq->lock, flags); > > > > > > > > while ((req = rq_list_pop(rqlist))) { > > > > struct virtblk_req *vbr = blk_mq_rq_to_pdu(req); > > > > int err; > > > > > > > > err = virtblk_add_req(vq->vq, vbr); > > > > if (err) { > > > > virtblk_unmap_data(req, vbr); > > > > virtblk_cleanup_cmd(req); > > > > blk_mq_requeue_request(req, true); > > > > } > > > > } > > > > > > > > kick = virtqueue_kick_prepare(vq->vq); > > > > spin_unlock_irqrestore(&vq->lock, flags); > > > > > > > > if (kick) > > > > virtqueue_notify(vq->vq); } > > > > > > > > > > > > it actually handles the error internally? > > > > > > > For all the errors it requeues the request here. > > > > ok and they will not prevent removal will they? > > > It should not prevent removal. > One must be careful every single time changing it to make sure that hw queues are not stopped in lower layer, but may be this is ok. 
> > >
> > > It would end up adding checking for special error code here as well to abort
> > > by translating broken VQ -> EIO to break the loop in virtblk_add_req_batch().
> > >
> > > Weighing on specific error code-based data path that may require audit from
> > > lower layers now and future, an explicit check of broken in this layer could be better.
> > >
> > > [..]
> >
> > Checking add was successful is preferred because it has to be done
> > *anyway* - device can get broken after you check before add.
> >
> > So I would like to understand why are we also checking explicitly
> > and I do not get it so far.
>
> > > checking explicitly to not depend on specific error code-based logic.
> >
> > I do not understand. You must handle vq add errors anyway.
>
> I believe removal of the two vq broken checks should also be fine.
> I would probably add the comment in the code indicating virtio block driver assumes that ENOSPC is not returned for broken VQ.

You can include this in the series if you like. Tweak to taste:

-->

virtio: document ENOSPC

drivers handle ENOSPC specially since it's an error one can get
from a working VQ. Document the semantics.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

---

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index b784aab66867..97ab0cce527d 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2296,6 +2296,10 @@ static inline int virtqueue_add(struct virtqueue *_vq,
  * at the same time (except where noted).
  *
  * Returns zero or a negative error (ie. ENOSPC, ENOMEM, EIO).
+ *
+ * NB: ENOSPC is a special code that is only returned on an attempt to add a
+ * buffer to a full VQ. It indicates that some buffers are outstanding and that
+ * the operation can be retried after some buffers have been used.
  */
 int virtqueue_add_sgs(struct virtqueue *_vq,
 		      struct scatterlist *sgs[],
* RE: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-21 10:43 ` Michael S. Tsirkin @ 2025-05-21 12:40 ` Parav Pandit 2025-05-21 16:49 ` Michael S. Tsirkin 0 siblings, 1 reply; 17+ messages in thread From: Parav Pandit @ 2025-05-21 12:40 UTC (permalink / raw) To: Michael S. Tsirkin Cc: stefanha@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin > From: Michael S. Tsirkin <mst@redhat.com> > Sent: Wednesday, May 21, 2025 4:13 PM > > On Wed, May 21, 2025 at 10:34:46AM +0000, Parav Pandit wrote: > > > > > From: Michael S. Tsirkin <mst@redhat.com> > > > Sent: Wednesday, May 21, 2025 3:46 PM > > > > > > On Wed, May 21, 2025 at 09:32:30AM +0000, Parav Pandit wrote: > > > > > From: Michael S. Tsirkin <mst@redhat.com> > > > > > Sent: Wednesday, May 21, 2025 2:49 PM > > > > > To: Parav Pandit <parav@nvidia.com> > > > > > Cc: stefanha@redhat.com; axboe@kernel.dk; > > > > > virtualization@lists.linux.dev; linux-block@vger.kernel.or; > > > > > stable@vger.kernel.org; NBU-Contact-Li Rongqing > > > > > (EXTERNAL) <lirongqing@baidu.com>; Chaitanya Kulkarni > > > > > <chaitanyak@nvidia.com>; xuanzhuo@linux.alibaba.com; > > > > > pbonzini@redhat.com; jasowang@redhat.com; Max Gurtovoy > > > > > <mgurtovoy@nvidia.com>; Israel Rukshin <israelr@nvidia.com> > > > > > Subject: Re: [PATCH v1] virtio_blk: Fix disk deletion hang on > > > > > device surprise removal > > > > > > > > > > On Wed, May 21, 2025 at 09:14:31AM +0000, Parav Pandit wrote: > > > > > > > From: Michael S. 
Tsirkin <mst@redhat.com> > > > > > > > Sent: Wednesday, May 21, 2025 1:48 PM > > > > > > > > > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > > > > > > When the PCI device is surprise removed, requests may not > > > > > > > > complete the device as the VQ is marked as broken. Due to > > > > > > > > this, the disk deletion hangs. > > > > > > > > > > > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > > > > > > > > > > > With this fix now fio completes swiftly. > > > > > > > > An alternative of IO timeout has been considered, however > > > > > > > > when the driver knows about unresponsive block device, > > > > > > > > swiftly clearing them enables users and upper layers to react > quickly. > > > > > > > > > > > > > > > > Verified with multiple device unplug iterations with > > > > > > > > pending requests in virtio used ring and some pending with the > device. > > > > > > > > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal > > > > > > > > of virtio pci device") > > > > > > > > Cc: stable@vger.kernel.org > > > > > > > > Reported-by: lirongqing@baidu.com > > > > > > > > Closes: > > > > > > > > > > > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55f > > > > > > > b73c > > > > > > > a9b4 > > > > > > > 74 > > > > > > > > 1@baidu.com/ > > > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > > > > > > --- > > > > > > > > changelog: > > > > > > > > v0->v1: > > > > > > > > - Fixed comments from Stefan to rename a cleanup function > > > > > > > > - Improved logic for handling any outstanding requests > > > > > > > > in bio layer > > > > > > > > - improved cancel callback to sync with ongoing done() > > > > > > > > > > > > > > thanks for the patch! 
> > > > > > > questions: > > > > > > > > > > > > > > > > > > > > > > --- > > > > > > > > drivers/block/virtio_blk.c | 95 > > > > > > > > ++++++++++++++++++++++++++++++++++++++ > > > > > > > > 1 file changed, 95 insertions(+) > > > > > > > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c > > > > > > > > b/drivers/block/virtio_blk.c index > > > > > > > > 7cffea01d868..5212afdbd3c7 > > > > > > > > 100644 > > > > > > > > --- a/drivers/block/virtio_blk.c > > > > > > > > +++ b/drivers/block/virtio_blk.c > > > > > > > > @@ -435,6 +435,13 @@ static blk_status_t > > > > > > > > virtio_queue_rq(struct > > > > > > > blk_mq_hw_ctx *hctx, > > > > > > > > blk_status_t status; > > > > > > > > int err; > > > > > > > > > > > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > > > > > > + * Once the queue is unquiesced, upper block layer > > > > > > > > +flushes any > > > > > > > pending > > > > > > > > + * queued requests; fail them right away. > > > > > > > > + */ > > > > > > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > > > > > > + return BLK_STS_IOERR; > > > > > > > > + > > > > > > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > > > > > > if (unlikely(status)) > > > > > > > > return status; > > > > > > > > > > > > > > just below this: > > > > > > > spin_lock_irqsave(&vblk->vqs[qid].lock, flags); > > > > > > > err = virtblk_add_req(vblk->vqs[qid].vq, vbr); > > > > > > > if (err) { > > > > > > > > > > > > > > > > > > > > > and virtblk_add_req calls virtqueue_add_sgs, so it will fail > > > > > > > on a broken > > > vq. > > > > > > > > > > > > > > Why do we need to check it one extra time here? > > > > > > > > > > > > > It may work, but for some reason if the hw queue is stopped in > > > > > > this flow, it > > > > > can hang the IOs flushing. > > > > > > > > > > > I considered it risky to rely on the error code ENOSPC > > > > > > returned by non virtio- > > > > > blk driver. 
> > > > > > In other words, if lower layer changed for some reason, we may > > > > > > end up in > > > > > stopping the hw queue when broken, and requests would hang. > > > > > > > > > > > > Compared to that one-time entry check seems more robust. > > > > > > > > > > I don't get it. > > > > > Checking twice in a row is more robust? > > > > No. I am not confident on the relying on the error code -ENOSPC > > > > from layers > > > outside of virtio-blk driver. > > > > > > You can rely on virtio core to return an error on a broken vq. > > > The error won't be -ENOSPC though, why would it? > > > > > Presently that is not the API contract between virtio core and driver. > > When the VQ is broken the error code is EIO. This is from the code > inspection. > > yes > > > If you prefer to rely on the code inspection of lower layer to define the virtio- > blk, I am fine and remove the two checks. > > I just find it fragile, but if you prefer this way, I am fine. > > I think it's better, yes. > Ok. Will adapt this in v2. > > > > If for a broken VQ, ENOSPC arrives, then hw queue is stopped and > > > > requests > > > could be stuck. > > > > > > Can you describe the scenario in more detail pls? > > > where does ENOSPC arrive from? when is the vq get broken ... > > > > > ENOSPC arrives when it fails to enqueue the request in present form. > > EIO arrives when VQ is broken. > > > > If in the future, ENOSPC arrives for broken VQ, following flow can trigger a > hang. > > > > cpu_0: > > virtblk_broken_device_cleanup() > > ... > > blk_mq_unquiesce_queue(); > > ... stage_1: > > blk_mq_freeze_queue_wait(). > > > > > > Cpu_1: > > Queue_rq() > > virtio_queue_rq() > > virtblk_add_req() > > -ENOSPC > > Stop_hw_queue() > > At this point, new requests in block layer may get stuck and may not > be enqueued to queue_rq(). > > > > > > > > > > What am I missing? > > > > > Can you describe the scenario in more detail? 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct > > > > > > > > rq_list > > > > > *rqlist) > > > > > > > > while ((req = rq_list_pop(rqlist))) { > > > > > > > > struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req- > > > > > > > >mq_hctx); > > > > > > > > > > > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > > > > > > + rq_list_add_tail(&requeue_list, req); > > > > > > > > + continue; > > > > > > > > + } > > > > > > > > + > > > > > > > > if (vq && vq != this_vq) > > > > > > > > virtblk_add_req_batch(vq, &submit_list); > > > > > > > > vq = this_vq; > > > > > > > > > > > > > > similarly > > > > > > > > > > > > > The error code is not surfacing up here from virtblk_add_req(). > > > > > > > > > > > > > > > but wait a sec: > > > > > > > > > > static void virtblk_add_req_batch(struct virtio_blk_vq *vq, > > > > > struct rq_list *rqlist) { > > > > > struct request *req; > > > > > unsigned long flags; > > > > > bool kick; > > > > > > > > > > spin_lock_irqsave(&vq->lock, flags); > > > > > > > > > > while ((req = rq_list_pop(rqlist))) { > > > > > struct virtblk_req *vbr = blk_mq_rq_to_pdu(req); > > > > > int err; > > > > > > > > > > err = virtblk_add_req(vq->vq, vbr); > > > > > if (err) { > > > > > virtblk_unmap_data(req, vbr); > > > > > virtblk_cleanup_cmd(req); > > > > > blk_mq_requeue_request(req, true); > > > > > } > > > > > } > > > > > > > > > > kick = virtqueue_kick_prepare(vq->vq); > > > > > spin_unlock_irqrestore(&vq->lock, flags); > > > > > > > > > > if (kick) > > > > > virtqueue_notify(vq->vq); } > > > > > > > > > > > > > > > it actually handles the error internally? > > > > > > > > > For all the errors it requeues the request here. > > > > > > ok and they will not prevent removal will they? > > > > > It should not prevent removal. > > One must be careful every single time changing it to make sure that hw > queues are not stopped in lower layer, but may be this is ok. 
> > > > > > > > > > > > > > > > > > > > > > > > > > It would end up adding checking for special error code here as > > > > > > well to abort > > > > > by translating broken VQ -> EIO to break the loop in > > > virtblk_add_req_batch(). > > > > > > > > > > > > Weighing on specific error code-based data path that may > > > > > > require audit from > > > > > lower layers now and future, an explicit check of broken in this > > > > > layer could be better. > > > > > > > > > > > > [..] > > > > > > > > > > > > > > > Checking add was successful is preferred because it has to be > > > > > done > > > > > *anyway* - device can get broken after you check before add. > > > > > > > > > > So I would like to understand why are we also checking > > > > > explicitly and I do not get it so far. > > > > > > > > checking explicitly to not depend on specific error code-based logic. > > > > > > > > > I do not understand. You must handle vq add errors anyway. > > > > I believe removal of the two vq broken checks should also be fine. > > I would probably add the comment in the code indicating virtio block driver > assumes that ENOSPC is not returned for broken VQ. > > You can include this in the series if you like. Tweak to taste: > Thanks for the patch. This virtio core comment update can be done at later point without fixes tag. Will park for the later. > --> > > virtio: document ENOSPC > > drivers handle ENOSPC specially since it's an error one can get from a working > VQ. Document the semantics. > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com> > > --- > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index > b784aab66867..97ab0cce527d 100644 > --- a/drivers/virtio/virtio_ring.c > +++ b/drivers/virtio/virtio_ring.c > @@ -2296,6 +2296,10 @@ static inline int virtqueue_add(struct virtqueue > *_vq, > * at the same time (except where noted). > * > * Returns zero or a negative error (ie. ENOSPC, ENOMEM, EIO). 
> + *
> + * NB: ENOSPC is a special code that is only returned on an attempt to add a
> + * buffer to a full VQ. It indicates that some buffers are outstanding and that
> + * the operation can be retried after some buffers have been used.
>  */
> int virtqueue_add_sgs(struct virtqueue *_vq,
> 		      struct scatterlist *sgs[],

^ permalink raw reply	[flat|nested] 17+ messages in thread
* Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal
  2025-05-21 12:40             ` Parav Pandit
@ 2025-05-21 16:49               ` Michael S. Tsirkin
  0 siblings, 0 replies; 17+ messages in thread
From: Michael S. Tsirkin @ 2025-05-21 16:49 UTC (permalink / raw)
  To: Parav Pandit
  Cc: stefanha@redhat.com, axboe@kernel.dk,
	virtualization@lists.linux.dev, linux-block@vger.kernel.org,
	stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL),
	Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com,
	pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy,
	Israel Rukshin

On Wed, May 21, 2025 at 12:40:10PM +0000, Parav Pandit wrote:
> > You can include this in the series if you like. Tweak to taste:
> >
> Thanks for the patch. This virtio core comment update can be done at later point without fixes tag.
> Will park for the later.

no problem with that, either.
* Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal [not found] <20250521062744.1361774-1-parav@nvidia.com> 2025-05-21 8:17 ` [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal Michael S. Tsirkin @ 2025-05-21 14:56 ` Stefan Hajnoczi 2025-05-22 2:57 ` Parav Pandit 1 sibling, 1 reply; 17+ messages in thread From: Stefan Hajnoczi @ 2025-05-21 14:56 UTC (permalink / raw) To: Parav Pandit Cc: mst, axboe, virtualization, linux-block, stable, lirongqing, kch, xuanzhuo, pbonzini, jasowang, Max Gurtovoy, Israel Rukshin [-- Attachment #1: Type: text/plain, Size: 6519 bytes --] On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > When the PCI device is surprise removed, requests may not complete > the device as the VQ is marked as broken. Due to this, the disk > deletion hangs. > > Fix it by aborting the requests when the VQ is broken. > > With this fix now fio completes swiftly. > An alternative of IO timeout has been considered, however > when the driver knows about unresponsive block device, swiftly clearing > them enables users and upper layers to react quickly. > > Verified with multiple device unplug iterations with pending requests in > virtio used ring and some pending with the device. 
> > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device") > Cc: stable@vger.kernel.org > Reported-by: lirongqing@baidu.com > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@baidu.com/ > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > Signed-off-by: Parav Pandit <parav@nvidia.com> > --- > changelog: > v0->v1: > - Fixed comments from Stefan to rename a cleanup function > - Improved logic for handling any outstanding requests > in bio layer > - improved cancel callback to sync with ongoing done() > > --- > drivers/block/virtio_blk.c | 95 ++++++++++++++++++++++++++++++++++++++ > 1 file changed, 95 insertions(+) > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c > index 7cffea01d868..5212afdbd3c7 100644 > --- a/drivers/block/virtio_blk.c > +++ b/drivers/block/virtio_blk.c > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx, > blk_status_t status; > int err; > > + /* Immediately fail all incoming requests if the vq is broken. > + * Once the queue is unquiesced, upper block layer flushes any pending > + * queued requests; fail them right away. 
> + */ > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > + return BLK_STS_IOERR; > + > status = virtblk_prep_rq(hctx, vblk, req, vbr); > if (unlikely(status)) > return status; > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist) > while ((req = rq_list_pop(rqlist))) { > struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx); > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > + rq_list_add_tail(&requeue_list, req); > + continue; > + } > + > if (vq && vq != this_vq) > virtblk_add_req_batch(vq, &submit_list); > vq = this_vq; > @@ -1554,6 +1566,87 @@ static int virtblk_probe(struct virtio_device *vdev) > return err; > } > > +static bool virtblk_request_cancel(struct request *rq, void *data) > +{ > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq); > + struct virtio_blk *vblk = data; > + struct virtio_blk_vq *vq; > + unsigned long flags; > + > + vq = &vblk->vqs[rq->mq_hctx->queue_num]; > + > + spin_lock_irqsave(&vq->lock, flags); > + > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR; > + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) > + blk_mq_complete_request(rq); > + > + spin_unlock_irqrestore(&vq->lock, flags); > + return true; > +} > + > +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk) > +{ > + struct request_queue *q = vblk->disk->queue; > + > + if (!virtqueue_is_broken(vblk->vqs[0].vq)) > + return; Can a subset of virtqueues be broken? If so, then this code doesn't handle it. > + > + /* Start freezing the queue, so that new requests keeps waitng at the s/waitng/waiting/ > + * door of bio_queue_enter(). We cannot fully freeze the queue because > + * freezed queue is an empty queue and there are pending requests, so > + * only start freezing it. > + */ > + blk_freeze_queue_start(q); > + > + /* When quiescing completes, all ongoing dispatches have completed > + * and no new dispatch will happen towards the driver. 
> + * This ensures that later when cancel is attempted, then are not > + * getting processed by the queue_rq() or queue_rqs() handlers. > + */ > + blk_mq_quiesce_queue(q); > + > + /* > + * Synchronize with any ongoing VQ callbacks, effectively quiescing > + * the device and preventing it from completing further requests > + * to the block layer. Any outstanding, incomplete requests will be > + * completed by virtblk_request_cancel(). > + */ > + virtio_synchronize_cbs(vblk->vdev); > + > + /* At this point, no new requests can enter the queue_rq() and > + * completion routine will not complete any new requests either for the > + * broken vq. Hence, it is safe to cancel all requests which are > + * started. > + */ > + blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, vblk); Although virtio_synchronize_cbs() was called, a broken/malicious device can still raise IRQs. Would that lead to use-after-free or similar undefined behavior for requests that have been submitted to the device? It seems safer to reset the device before marking the requests as failed. > + blk_mq_tagset_wait_completed_request(&vblk->tag_set); > + > + /* All pending requests are cleaned up. Time to resume so that disk > + * deletion can be smooth. Start the HW queues so that when queue is > + * unquiesced requests can again enter the driver. > + */ > + blk_mq_start_stopped_hw_queues(q, true); > + > + /* Unquiescing will trigger dispatching any pending requests to the > + * driver which has crossed bio_queue_enter() to the driver. > + */ > + blk_mq_unquiesce_queue(q); > + > + /* Wait for all pending dispatches to terminate which may have been > + * initiated after unquiescing. > + */ > + blk_mq_freeze_queue_wait(q); > + > + /* Mark the disk dead so that once queue unfreeze, the requests > + * waiting at the door of bio_queue_enter() can be aborted right away. > + */ > + blk_mark_disk_dead(vblk->disk); > + > + /* Unfreeze the queue so that any waiting requests will be aborted. 
*/ > + blk_mq_unfreeze_queue_nomemrestore(q); > +} > + > static void virtblk_remove(struct virtio_device *vdev) > { > struct virtio_blk *vblk = vdev->priv; > @@ -1561,6 +1654,8 @@ static void virtblk_remove(struct virtio_device *vdev) > /* Make sure no work handler is accessing the device. */ > flush_work(&vblk->config_work); > > + virtblk_broken_device_cleanup(vblk); > + > del_gendisk(vblk->disk); > blk_mq_free_tag_set(&vblk->tag_set); > > -- > 2.34.1 > [-- Attachment #2: signature.asc --] [-- Type: application/pgp-signature, Size: 488 bytes --] ^ permalink raw reply [flat|nested] 17+ messages in thread
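Stefan's question about partially broken devices points at the guard `if (!virtqueue_is_broken(vblk->vqs[0].vq))` in the patch, which samples only the first VQ. A stricter, device-wide check could look like the following sketch (hypothetical helper, name invented here; kernel-context code, not standalone-compilable):

```c
/* Hypothetical helper: report the device as broken only if every VQ is.
 * vblk->num_vqs and vblk->vqs[] are the fields virtio_blk already keeps.
 */
static bool virtblk_all_vqs_broken(struct virtio_blk *vblk)
{
	int i;

	for (i = 0; i < vblk->num_vqs; i++)
		if (!virtqueue_is_broken(vblk->vqs[i].vq))
			return false;
	return true;
}
```

As the follow-up messages note, on PCI surprise removal all VQs are broken together, so sampling a single VQ is sufficient for that case; a helper like this only matters if a subset of VQs can break independently.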
* RE: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-21 14:56 ` Stefan Hajnoczi @ 2025-05-22 2:57 ` Parav Pandit 2025-05-22 14:36 ` Stefan Hajnoczi 0 siblings, 1 reply; 17+ messages in thread From: Parav Pandit @ 2025-05-22 2:57 UTC (permalink / raw) To: Stefan Hajnoczi Cc: mst@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin > From: Stefan Hajnoczi <stefanha@redhat.com> > Sent: Wednesday, May 21, 2025 8:27 PM > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > When the PCI device is surprise removed, requests may not complete the > > device as the VQ is marked as broken. Due to this, the disk deletion > > hangs. > > > > Fix it by aborting the requests when the VQ is broken. > > > > With this fix now fio completes swiftly. > > An alternative of IO timeout has been considered, however when the > > driver knows about unresponsive block device, swiftly clearing them > > enables users and upper layers to react quickly. > > > > Verified with multiple device unplug iterations with pending requests > > in virtio used ring and some pending with the device. 
> > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio > > pci device") > > Cc: stable@vger.kernel.org > > Reported-by: lirongqing@baidu.com > > Closes: > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b474 > > 1@baidu.com/ > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > --- > > changelog: > > v0->v1: > > - Fixed comments from Stefan to rename a cleanup function > > - Improved logic for handling any outstanding requests > > in bio layer > > - improved cancel callback to sync with ongoing done() > > > > --- > > drivers/block/virtio_blk.c | 95 > > ++++++++++++++++++++++++++++++++++++++ > > 1 file changed, 95 insertions(+) > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c > > index 7cffea01d868..5212afdbd3c7 100644 > > --- a/drivers/block/virtio_blk.c > > +++ b/drivers/block/virtio_blk.c > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct > blk_mq_hw_ctx *hctx, > > blk_status_t status; > > int err; > > > > + /* Immediately fail all incoming requests if the vq is broken. > > + * Once the queue is unquiesced, upper block layer flushes any > pending > > + * queued requests; fail them right away. 
> > + */ > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > + return BLK_STS_IOERR; > > + > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > if (unlikely(status)) > > return status; > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist) > > while ((req = rq_list_pop(rqlist))) { > > struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req- > >mq_hctx); > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > + rq_list_add_tail(&requeue_list, req); > > + continue; > > + } > > + > > if (vq && vq != this_vq) > > virtblk_add_req_batch(vq, &submit_list); > > vq = this_vq; > > @@ -1554,6 +1566,87 @@ static int virtblk_probe(struct virtio_device *vdev) > > return err; > > } > > > > +static bool virtblk_request_cancel(struct request *rq, void *data) { > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq); > > + struct virtio_blk *vblk = data; > > + struct virtio_blk_vq *vq; > > + unsigned long flags; > > + > > + vq = &vblk->vqs[rq->mq_hctx->queue_num]; > > + > > + spin_lock_irqsave(&vq->lock, flags); > > + > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR; > > + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) > > + blk_mq_complete_request(rq); > > + > > + spin_unlock_irqrestore(&vq->lock, flags); > > + return true; > > +} > > + > > +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk) { > > + struct request_queue *q = vblk->disk->queue; > > + > > + if (!virtqueue_is_broken(vblk->vqs[0].vq)) > > + return; > > Can a subset of virtqueues be broken? If so, then this code doesn't handle it. On device removal all the VQs are broken; this check uses only one VQ to decide. In the future, a more elaborate API such as virtio_dev_broken() can be added. I prefer to keep this patch minimal, without extending more APIs, given that it carries a Fixes tag. > > > + > > + /* Start freezing the queue, so that new requests keeps waitng at > > +the > > s/waitng/waiting/ > Ack.
We cannot fully freeze the queue > because > > + * freezed queue is an empty queue and there are pending requests, > so > > + * only start freezing it. > > + */ > > + blk_freeze_queue_start(q); > > + > > + /* When quiescing completes, all ongoing dispatches have completed > > + * and no new dispatch will happen towards the driver. > > + * This ensures that later when cancel is attempted, then are not > > + * getting processed by the queue_rq() or queue_rqs() handlers. > > + */ > > + blk_mq_quiesce_queue(q); > > + > > + /* > > + * Synchronize with any ongoing VQ callbacks, effectively quiescing > > + * the device and preventing it from completing further requests > > + * to the block layer. Any outstanding, incomplete requests will be > > + * completed by virtblk_request_cancel(). > > + */ > > + virtio_synchronize_cbs(vblk->vdev); > > + > > + /* At this point, no new requests can enter the queue_rq() and > > + * completion routine will not complete any new requests either for > the > > + * broken vq. Hence, it is safe to cancel all requests which are > > + * started. > > + */ > > + blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, > > +vblk); > > Although virtio_synchronize_cbs() was called, a broken/malicious device can > still raise IRQs. Would that lead to use-after-free or similar undefined > behavior for requests that have been submitted to the device? > It shouldn't, because vring_interrupt() also checks for the broken VQ before invoking the _done(). Once the VQ is broken, even if _done() is invoked, it won't progress further than get_buf(). And the VQs are freed later in del_vq() after the device is reset, as you suggested. > It seems safer to reset the device before marking the requests as failed. > Such an addition should be avoided because when the device is surprise removed, even reset will not complete. > > + blk_mq_tagset_wait_completed_request(&vblk->tag_set); > > + > > + /* All pending requests are cleaned up.
Time to resume so that disk > > + * deletion can be smooth. Start the HW queues so that when queue > is > > + * unquiesced requests can again enter the driver. > > + */ > > + blk_mq_start_stopped_hw_queues(q, true); > > + > > + /* Unquiescing will trigger dispatching any pending requests to the > > + * driver which has crossed bio_queue_enter() to the driver. > > + */ > > + blk_mq_unquiesce_queue(q); > > + > > + /* Wait for all pending dispatches to terminate which may have been > > + * initiated after unquiescing. > > + */ > > + blk_mq_freeze_queue_wait(q); > > + > > + /* Mark the disk dead so that once queue unfreeze, the requests > > + * waiting at the door of bio_queue_enter() can be aborted right > away. > > + */ > > + blk_mark_disk_dead(vblk->disk); > > + > > + /* Unfreeze the queue so that any waiting requests will be aborted. > */ > > + blk_mq_unfreeze_queue_nomemrestore(q); > > +} > > + > > static void virtblk_remove(struct virtio_device *vdev) { > > struct virtio_blk *vblk = vdev->priv; @@ -1561,6 +1654,8 @@ static > > void virtblk_remove(struct virtio_device *vdev) > > /* Make sure no work handler is accessing the device. */ > > flush_work(&vblk->config_work); > > > > + virtblk_broken_device_cleanup(vblk); > > + > > del_gendisk(vblk->disk); > > blk_mq_free_tag_set(&vblk->tag_set); > > > > -- > > 2.34.1 > > ^ permalink raw reply [flat|nested] 17+ messages in thread
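Parav's claim above — that a stray IRQ cannot reach the driver's `_done()` callback — rests on the broken-VQ guard in the virtio core's interrupt handler. A simplified paraphrase of `vring_interrupt()` from `drivers/virtio/virtio_ring.c` (barriers and notification-hardening `#ifdef`s elided; not the exact upstream code):

```c
irqreturn_t vring_interrupt(int irq, void *_vq)
{
	struct vring_virtqueue *vq = to_vvq(_vq);

	if (!more_used(vq)) {
		pr_debug("virtqueue interrupt with no work for %p\n", vq);
		return IRQ_NONE;
	}

	/* A broken VQ never reaches the driver callback below, so an IRQ
	 * raised by a removed (or malicious) device is swallowed here.
	 */
	if (unlikely(vq->broken))
		return IRQ_HANDLED;

	if (vq->vq.callback)
		vq->vq.callback(&vq->vq);

	return IRQ_HANDLED;
}
```

This guard is per-VQ state, which is why the later discussion turns on whether every VQ is actually marked broken when the cleanup runs.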
* Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-22 2:57 ` Parav Pandit @ 2025-05-22 14:36 ` Stefan Hajnoczi 2025-05-22 14:55 ` Parav Pandit 0 siblings, 1 reply; 17+ messages in thread From: Stefan Hajnoczi @ 2025-05-22 14:36 UTC (permalink / raw) To: Parav Pandit Cc: Stefan Hajnoczi, mst@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin On Wed, May 21, 2025 at 10:57 PM Parav Pandit <parav@nvidia.com> wrote: > > From: Stefan Hajnoczi <stefanha@redhat.com> > > Sent: Wednesday, May 21, 2025 8:27 PM > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > When the PCI device is surprise removed, requests may not complete the > > > device as the VQ is marked as broken. Due to this, the disk deletion > > > hangs. > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > With this fix now fio completes swiftly. > > > An alternative of IO timeout has been considered, however when the > > > driver knows about unresponsive block device, swiftly clearing them > > > enables users and upper layers to react quickly. > > > > > > Verified with multiple device unplug iterations with pending requests > > > in virtio used ring and some pending with the device. 
> > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio > > > pci device") > > > Cc: stable@vger.kernel.org > > > Reported-by: lirongqing@baidu.com > > > Closes: > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b474 > > > 1@baidu.com/ > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > --- > > > changelog: > > > v0->v1: > > > - Fixed comments from Stefan to rename a cleanup function > > > - Improved logic for handling any outstanding requests > > > in bio layer > > > - improved cancel callback to sync with ongoing done() > > > > > > --- > > > drivers/block/virtio_blk.c | 95 > > > ++++++++++++++++++++++++++++++++++++++ > > > 1 file changed, 95 insertions(+) > > > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c > > > index 7cffea01d868..5212afdbd3c7 100644 > > > --- a/drivers/block/virtio_blk.c > > > +++ b/drivers/block/virtio_blk.c > > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct > > blk_mq_hw_ctx *hctx, > > > blk_status_t status; > > > int err; > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > + * Once the queue is unquiesced, upper block layer flushes any > > pending > > > + * queued requests; fail them right away. 
> > > + */ > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > + return BLK_STS_IOERR; > > > + > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > if (unlikely(status)) > > > return status; > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list *rqlist) > > > while ((req = rq_list_pop(rqlist))) { > > > struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req- > > >mq_hctx); > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > + rq_list_add_tail(&requeue_list, req); > > > + continue; > > > + } > > > + > > > if (vq && vq != this_vq) > > > virtblk_add_req_batch(vq, &submit_list); > > > vq = this_vq; > > > @@ -1554,6 +1566,87 @@ static int virtblk_probe(struct virtio_device > > *vdev) > > > return err; > > > } > > > > > > +static bool virtblk_request_cancel(struct request *rq, void *data) { > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq); > > > + struct virtio_blk *vblk = data; > > > + struct virtio_blk_vq *vq; > > > + unsigned long flags; > > > + > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num]; > > > + > > > + spin_lock_irqsave(&vq->lock, flags); > > > + > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR; > > > + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) > > > + blk_mq_complete_request(rq); > > > + > > > + spin_unlock_irqrestore(&vq->lock, flags); > > > + return true; > > > +} > > > + > > > +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk) { > > > + struct request_queue *q = vblk->disk->queue; > > > + > > > + if (!virtqueue_is_broken(vblk->vqs[0].vq)) > > > + return; > > > > Can a subset of virtqueues be broken? If so, then this code doesn't handle it. > On device removal all the VQs are broken. This check only uses a VQ to decide on. > In future may be more elaborate API to have virtio_dev_broken() can be added. > Prefer to keep this patch without extending many APIs given it has Fixes tag. virtblk_remove() is called not just when a PCI device is hot unplugged. 
For example, removing the virtio_blk kernel module or unbinding a specific virtio device instance also calls it. My concern is that virtblk_broken_device_cleanup() is only intended for the cases where all virtqueues are broken or none are broken. If just the first virtqueue is broken then it completes requests on operational virtqueues and they may still raise an interrupt. The use-after-free I'm thinking about is when virtblk_request_cancel() -> ... -> blk_mq_end_request() has been called on a virtqueue that is not broken, followed by virtblk_done() using the struct request obtained from blk_mq_rq_from_pdu(). Maybe just adding a virtqueue_is_broken() check in virtblk_request_cancel() is enough to skip requests that are still in-flight on operational virtqueues. > > > > > > + > > > + /* Start freezing the queue, so that new requests keeps waitng at > > > +the > > > > s/waitng/waiting/ > > > Ack. > > > > + * door of bio_queue_enter(). We cannot fully freeze the queue > > because > > > + * freezed queue is an empty queue and there are pending requests, > > so > > > + * only start freezing it. > > > + */ > > > + blk_freeze_queue_start(q); > > > + > > > + /* When quiescing completes, all ongoing dispatches have completed > > > + * and no new dispatch will happen towards the driver. > > > + * This ensures that later when cancel is attempted, then are not > > > + * getting processed by the queue_rq() or queue_rqs() handlers. > > > + */ > > > + blk_mq_quiesce_queue(q); > > > + > > > + /* > > > + * Synchronize with any ongoing VQ callbacks, effectively quiescing > > > + * the device and preventing it from completing further requests > > > + * to the block layer. Any outstanding, incomplete requests will be > > > + * completed by virtblk_request_cancel(). 
> > > + */ > > > + virtio_synchronize_cbs(vblk->vdev); > > > + > > > + /* At this point, no new requests can enter the queue_rq() and > > > + * completion routine will not complete any new requests either for > > the > > > + * broken vq. Hence, it is safe to cancel all requests which are > > > + * started. > > > + */ > > > + blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_request_cancel, > > > +vblk); > > > > Although virtio_synchronize_cbs() was called, a broken/malicious device can > > still raise IRQs. Would that lead to use-after-free or similar undefined > > behavior for requests that have been submitted to the device? > > > It shouldn't because vring_interrupt() also checks for the broken VQ before invoking the _done(). > Once the VQ is broken and even if _done() is invoked, it wont progress further on get_buf(). > And VQs are freed later in del_vq() after the device is reset as you suggested. See above about a scenario where a race can happen. > > > It seems safer to reset the device before marking the requests as failed. > > > Such addition should be avoided because when the device is surprise removed, even reset will not complete. The virtblk_remove() function modified by this patch calls virtio_reset_device(). Is the expected behavior after this patch that virtblk_remove() spins forever? > > > > + blk_mq_tagset_wait_completed_request(&vblk->tag_set); > > > + > > > + /* All pending requests are cleaned up. Time to resume so that disk > > > + * deletion can be smooth. Start the HW queues so that when queue > > is > > > + * unquiesced requests can again enter the driver. > > > + */ > > > + blk_mq_start_stopped_hw_queues(q, true); > > > + > > > + /* Unquiescing will trigger dispatching any pending requests to the > > > + * driver which has crossed bio_queue_enter() to the driver. > > > + */ > > > + blk_mq_unquiesce_queue(q); > > > + > > > + /* Wait for all pending dispatches to terminate which may have been > > > + * initiated after unquiescing. 
> > > + */ > > > + blk_mq_freeze_queue_wait(q); > > > + > > > + /* Mark the disk dead so that once queue unfreeze, the requests > > > + * waiting at the door of bio_queue_enter() can be aborted right > > away. > > > + */ > > > + blk_mark_disk_dead(vblk->disk); > > > + > > > + /* Unfreeze the queue so that any waiting requests will be aborted. > > */ > > > + blk_mq_unfreeze_queue_nomemrestore(q); > > > +} > > > + > > > static void virtblk_remove(struct virtio_device *vdev) { > > > struct virtio_blk *vblk = vdev->priv; @@ -1561,6 +1654,8 @@ static > > > void virtblk_remove(struct virtio_device *vdev) > > > /* Make sure no work handler is accessing the device. */ > > > flush_work(&vblk->config_work); > > > > > > + virtblk_broken_device_cleanup(vblk); > > > + > > > del_gendisk(vblk->disk); > > > blk_mq_free_tag_set(&vblk->tag_set); > > > > > > -- > > > 2.34.1 > > > > ^ permalink raw reply [flat|nested] 17+ messages in thread
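One way to express Stefan's suggested mitigation — skipping requests whose own VQ is still operational, so the device can complete them normally — is a per-request broken check inside the cancel iterator. A sketch against the hunk quoted above (untested, illustrative only):

```c
static bool virtblk_request_cancel(struct request *rq, void *data)
{
	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
	struct virtio_blk *vblk = data;
	struct virtio_blk_vq *vq;
	unsigned long flags;

	vq = &vblk->vqs[rq->mq_hctx->queue_num];

	/* Skip requests owned by an operational VQ: the device may still
	 * complete them and raise an IRQ, so force-completing them here
	 * could race with virtblk_done() on the same request.
	 */
	if (!virtqueue_is_broken(vq->vq))
		return true;

	spin_lock_irqsave(&vq->lock, flags);

	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
		blk_mq_complete_request(rq);

	spin_unlock_irqrestore(&vq->lock, flags);
	return true;
}
```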
* RE: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-22 14:36 ` Stefan Hajnoczi @ 2025-05-22 14:55 ` Parav Pandit 2025-05-22 18:12 ` Stefan Hajnoczi 2025-05-26 9:23 ` Parav Pandit 0 siblings, 2 replies; 17+ messages in thread From: Parav Pandit @ 2025-05-22 14:55 UTC (permalink / raw) To: Stefan Hajnoczi Cc: Stefan Hajnoczi, mst@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin > From: Stefan Hajnoczi <stefanha@gmail.com> > Sent: Thursday, May 22, 2025 8:06 PM > > On Wed, May 21, 2025 at 10:57 PM Parav Pandit <parav@nvidia.com> wrote: > > > From: Stefan Hajnoczi <stefanha@redhat.com> > > > Sent: Wednesday, May 21, 2025 8:27 PM > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > > When the PCI device is surprise removed, requests may not complete > > > > the device as the VQ is marked as broken. Due to this, the disk > > > > deletion hangs. > > > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > > > With this fix now fio completes swiftly. > > > > An alternative of IO timeout has been considered, however when the > > > > driver knows about unresponsive block device, swiftly clearing > > > > them enables users and upper layers to react quickly. > > > > > > > > Verified with multiple device unplug iterations with pending > > > > requests in virtio used ring and some pending with the device. 
> > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of > > > > virtio pci device") > > > > Cc: stable@vger.kernel.org > > > > Reported-by: lirongqing@baidu.com > > > > Closes: > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9 > > > > b474 > > > > 1@baidu.com/ > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > > --- > > > > changelog: > > > > v0->v1: > > > > - Fixed comments from Stefan to rename a cleanup function > > > > - Improved logic for handling any outstanding requests > > > > in bio layer > > > > - improved cancel callback to sync with ongoing done() > > > > > > > > --- > > > > drivers/block/virtio_blk.c | 95 > > > > ++++++++++++++++++++++++++++++++++++++ > > > > 1 file changed, 95 insertions(+) > > > > > > > > diff --git a/drivers/block/virtio_blk.c > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..5212afdbd3c7 > > > > 100644 > > > > --- a/drivers/block/virtio_blk.c > > > > +++ b/drivers/block/virtio_blk.c > > > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct > > > blk_mq_hw_ctx *hctx, > > > > blk_status_t status; > > > > int err; > > > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > > + * Once the queue is unquiesced, upper block layer flushes any > > > pending > > > > + * queued requests; fail them right away. 
> > > > + */ > > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > > + return BLK_STS_IOERR; > > > > + > > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > > if (unlikely(status)) > > > > return status; > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list > *rqlist) > > > > while ((req = rq_list_pop(rqlist))) { > > > > struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req- > > > >mq_hctx); > > > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > > + rq_list_add_tail(&requeue_list, req); > > > > + continue; > > > > + } > > > > + > > > > if (vq && vq != this_vq) > > > > virtblk_add_req_batch(vq, &submit_list); > > > > vq = this_vq; > > > > @@ -1554,6 +1566,87 @@ static int virtblk_probe(struct > > > > virtio_device > > > *vdev) > > > > return err; > > > > } > > > > > > > > +static bool virtblk_request_cancel(struct request *rq, void *data) { > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq); > > > > + struct virtio_blk *vblk = data; > > > > + struct virtio_blk_vq *vq; > > > > + unsigned long flags; > > > > + > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num]; > > > > + > > > > + spin_lock_irqsave(&vq->lock, flags); > > > > + > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR; > > > > + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) > > > > + blk_mq_complete_request(rq); > > > > + > > > > + spin_unlock_irqrestore(&vq->lock, flags); > > > > + return true; > > > > +} > > > > + > > > > +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk) { > > > > + struct request_queue *q = vblk->disk->queue; > > > > + > > > > + if (!virtqueue_is_broken(vblk->vqs[0].vq)) > > > > + return; > > > > > > Can a subset of virtqueues be broken? If so, then this code doesn't handle > it. > > On device removal all the VQs are broken. This check only uses a VQ to decide > on. > > In future may be more elaborate API to have virtio_dev_broken() can be > added. 
> > Prefer to keep this patch without extending many APIs given it has Fixes tag. > > virtblk_remove() is called not just when a PCI device is hot unplugged. For > example, removing the virtio_blk kernel module or unbinding a specific virtio > device instance also calls it. > This is ok. > My concern is that virtblk_broken_device_cleanup() is only intended for the > cases where all virtqueues are broken or none are broken. If just the first > virtqueue is broken then it completes requests on operational virtqueues and > they may still raise an interrupt. > I see that the vq broken flag was lately extended to the reset scenario too, in vp_modern_enable_vq_after_reset(). So yes, this patch was intended for the original surprise-removal bug, where vq broken was not set in the reset cases. I believe that to fix the cited patch, a device->broken flag should be used. Max indicated this in an internal review, but I was inclined to avoid adding many changes, and hence reused vq broken. So one option is to extend virtio_break_device() to set a flag like the below and check it during remove(): dev->broken = true; or to revert the patch, 43bb40c5b926, which Michael did not like. > The use-after-free I'm thinking about is when virtblk_request_cancel() > -> ... -> blk_mq_end_request() has been called on a virtqueue that is > not broken, followed by virtblk_done() using the struct request obtained from > blk_mq_rq_from_pdu(). > This can possibly happen in the case when a non-surprise removal is done. > Maybe just adding a virtqueue_is_broken() check in > virtblk_request_cancel() is enough to skip requests that are still in-flight on > operational virtqueues. Well, the idea is to call the request_cancel() iterator only if the VQ is broken, so in a regular remove() it should not be called. The existing flow is better. > > > > > > > > > > + > > > > + /* Start freezing the queue, so that new requests keeps waitng > > > > +at the > > > > > > s/waitng/waiting/ > > > > > Ack.
> > > > > > + * door of bio_queue_enter(). We cannot fully freeze the queue > > > because > > > > + * freezed queue is an empty queue and there are pending > > > > + requests, > > > so > > > > + * only start freezing it. > > > > + */ > > > > + blk_freeze_queue_start(q); > > > > + > > > > + /* When quiescing completes, all ongoing dispatches have completed > > > > + * and no new dispatch will happen towards the driver. > > > > + * This ensures that later when cancel is attempted, then are not > > > > + * getting processed by the queue_rq() or queue_rqs() handlers. > > > > + */ > > > > + blk_mq_quiesce_queue(q); > > > > + > > > > + /* > > > > + * Synchronize with any ongoing VQ callbacks, effectively quiescing > > > > + * the device and preventing it from completing further requests > > > > + * to the block layer. Any outstanding, incomplete requests will be > > > > + * completed by virtblk_request_cancel(). > > > > + */ > > > > + virtio_synchronize_cbs(vblk->vdev); > > > > + > > > > + /* At this point, no new requests can enter the queue_rq() and > > > > + * completion routine will not complete any new requests > > > > + either for > > > the > > > > + * broken vq. Hence, it is safe to cancel all requests which are > > > > + * started. > > > > + */ > > > > + blk_mq_tagset_busy_iter(&vblk->tag_set, > > > > +virtblk_request_cancel, vblk); > > > > > > Although virtio_synchronize_cbs() was called, a broken/malicious > > > device can still raise IRQs. Would that lead to use-after-free or > > > similar undefined behavior for requests that have been submitted to the > device? > > > > > It shouldn't because vring_interrupt() also checks for the broken VQ before > invoking the _done(). > > Once the VQ is broken and even if _done() is invoked, it wont progress > further on get_buf(). > > And VQs are freed later in del_vq() after the device is reset as you suggested. > > See above about a scenario where a race can happen. 
> > > > > > It seems safer to reset the device before marking the requests as failed. > > > > > Such addition should be avoided because when the device is surprise > removed, even reset will not complete. > > The virtblk_remove() function modified by this patch calls > virtio_reset_device(). Is the expected behavior after this patch that > virtblk_remove() spins forever? If the PCI device is truly removed physically, then yes. This patch is not addressing such problem that existed even before the patch in fixes tag. I have experienced this already. Adding that support is relatively bigger change (than this fix). > > > > > > > + blk_mq_tagset_wait_completed_request(&vblk->tag_set); > > > > + > > > > + /* All pending requests are cleaned up. Time to resume so that disk > > > > + * deletion can be smooth. Start the HW queues so that when > > > > + queue > > > is > > > > + * unquiesced requests can again enter the driver. > > > > + */ > > > > + blk_mq_start_stopped_hw_queues(q, true); > > > > + > > > > + /* Unquiescing will trigger dispatching any pending requests to the > > > > + * driver which has crossed bio_queue_enter() to the driver. > > > > + */ > > > > + blk_mq_unquiesce_queue(q); > > > > + > > > > + /* Wait for all pending dispatches to terminate which may have been > > > > + * initiated after unquiescing. > > > > + */ > > > > + blk_mq_freeze_queue_wait(q); > > > > + > > > > + /* Mark the disk dead so that once queue unfreeze, the requests > > > > + * waiting at the door of bio_queue_enter() can be aborted > > > > + right > > > away. > > > > + */ > > > > + blk_mark_disk_dead(vblk->disk); > > > > + > > > > + /* Unfreeze the queue so that any waiting requests will be aborted. 
> > > */ > > > > + blk_mq_unfreeze_queue_nomemrestore(q); > > > > +} > > > > + > > > > static void virtblk_remove(struct virtio_device *vdev) { > > > > struct virtio_blk *vblk = vdev->priv; @@ -1561,6 +1654,8 @@ > > > > static void virtblk_remove(struct virtio_device *vdev) > > > > /* Make sure no work handler is accessing the device. */ > > > > flush_work(&vblk->config_work); > > > > > > > > + virtblk_broken_device_cleanup(vblk); > > > > + > > > > del_gendisk(vblk->disk); > > > > blk_mq_free_tag_set(&vblk->tag_set); > > > > > > > > -- > > > > 2.34.1 > > > > > > ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-22 14:55 ` Parav Pandit @ 2025-05-22 18:12 ` Stefan Hajnoczi 2025-05-22 18:58 ` Parav Pandit 2025-05-26 9:23 ` Parav Pandit 1 sibling, 1 reply; 17+ messages in thread From: Stefan Hajnoczi @ 2025-05-22 18:12 UTC (permalink / raw) To: Parav Pandit Cc: Stefan Hajnoczi, mst@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin On Thu, May 22, 2025 at 10:56 AM Parav Pandit <parav@nvidia.com> wrote: > > > > From: Stefan Hajnoczi <stefanha@gmail.com> > > Sent: Thursday, May 22, 2025 8:06 PM > > > > On Wed, May 21, 2025 at 10:57 PM Parav Pandit <parav@nvidia.com> wrote: > > > > From: Stefan Hajnoczi <stefanha@redhat.com> > > > > Sent: Wednesday, May 21, 2025 8:27 PM > > > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > > > When the PCI device is surprise removed, requests may not complete > > > > > the device as the VQ is marked as broken. Due to this, the disk > > > > > deletion hangs. > > > > > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > > > > > With this fix now fio completes swiftly. > > > > > An alternative of IO timeout has been considered, however when the > > > > > driver knows about unresponsive block device, swiftly clearing > > > > > them enables users and upper layers to react quickly. > > > > > > > > > > Verified with multiple device unplug iterations with pending > > > > > requests in virtio used ring and some pending with the device. 
> > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of > > > > > virtio pci device") > > > > > Cc: stable@vger.kernel.org > > > > > Reported-by: lirongqing@baidu.com > > > > > Closes: > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9 > > > > > b474 > > > > > 1@baidu.com/ > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > > > --- > > > > > changelog: > > > > > v0->v1: > > > > > - Fixed comments from Stefan to rename a cleanup function > > > > > - Improved logic for handling any outstanding requests > > > > > in bio layer > > > > > - improved cancel callback to sync with ongoing done() > > > > > > > > > > --- > > > > > drivers/block/virtio_blk.c | 95 > > > > > ++++++++++++++++++++++++++++++++++++++ > > > > > 1 file changed, 95 insertions(+) > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c > > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..5212afdbd3c7 > > > > > 100644 > > > > > --- a/drivers/block/virtio_blk.c > > > > > +++ b/drivers/block/virtio_blk.c > > > > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct > > > > blk_mq_hw_ctx *hctx, > > > > > blk_status_t status; > > > > > int err; > > > > > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > > > + * Once the queue is unquiesced, upper block layer flushes any > > > > pending > > > > > + * queued requests; fail them right away. 
> > > > > + */ > > > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > > > + return BLK_STS_IOERR; > > > > > + > > > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > > > if (unlikely(status)) > > > > > return status; > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list > > *rqlist) > > > > > while ((req = rq_list_pop(rqlist))) { > > > > > struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req- > > > > >mq_hctx); > > > > > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > > > + rq_list_add_tail(&requeue_list, req); > > > > > + continue; > > > > > + } > > > > > + > > > > > if (vq && vq != this_vq) > > > > > virtblk_add_req_batch(vq, &submit_list); > > > > > vq = this_vq; > > > > > @@ -1554,6 +1566,87 @@ static int virtblk_probe(struct > > > > > virtio_device > > > > *vdev) > > > > > return err; > > > > > } > > > > > > > > > > +static bool virtblk_request_cancel(struct request *rq, void *data) { > > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq); > > > > > + struct virtio_blk *vblk = data; > > > > > + struct virtio_blk_vq *vq; > > > > > + unsigned long flags; > > > > > + > > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num]; > > > > > + > > > > > + spin_lock_irqsave(&vq->lock, flags); > > > > > + > > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR; > > > > > + if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) > > > > > + blk_mq_complete_request(rq); > > > > > + > > > > > + spin_unlock_irqrestore(&vq->lock, flags); > > > > > + return true; > > > > > +} > > > > > + > > > > > +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk) { > > > > > + struct request_queue *q = vblk->disk->queue; > > > > > + > > > > > + if (!virtqueue_is_broken(vblk->vqs[0].vq)) > > > > > + return; > > > > > > > > Can a subset of virtqueues be broken? If so, then this code doesn't handle > > it. > > > On device removal all the VQs are broken. This check only uses a VQ to decide > > on. 
> > > In future may be more elaborate API to have virtio_dev_broken() can be > > added. > > > Prefer to keep this patch without extending many APIs given it has Fixes tag. > > > > virtblk_remove() is called not just when a PCI device is hot unplugged. For > > example, removing the virtio_blk kernel module or unbinding a specific virtio > > device instance also calls it. > > > This is ok. > > > My concern is that virtblk_broken_device_cleanup() is only intended for the > > cases where all virtqueues are broken or none are broken. If just the first > > virtqueue is broken then it completes requests on operational virtqueues and > > they may still raise an interrupt. > > > I see that vq broken is extended for each reset scenario too lately in vp_modern_enable_vq_after_reset(). > So yes, this patch which was intended for original surprise removal bug where vq broken was not done for reset cases. > > I believe for fixing the cited patch, device->broken flag should be used. > Max indicated this in an internal review, but I was inclined to avoid adding many changes. > And hence reuse vq broken. > > So one option is to extend, > > virtio_break_device() to have a flag like below and check during remove(). > dev->broken = true; > > or to revert the patch, 43bb40c5b926, which Michael was not linking. > > > The use-after-free I'm thinking about is when virtblk_request_cancel() > > -> ... -> blk_mq_end_request() has been called on a virtqueue that is > > not broken, followed by virtblk_done() using the struct request obtained from > > blk_mq_rq_from_pdu(). > > > This can happen for case when nonsurprise removal is done possibly. > > > Maybe just adding a virtqueue_is_broken() check in > > virtblk_request_cancel() is enough to skip requests that are still in-flight on > > operational virtqueues. > Well, the idea of calling request_cancel() iterator only if the VQ is broken. > So in regular remove() this should not be called. Existing flow is better. 
> > > > > > > > > > > > > > > + > > > > > + /* Start freezing the queue, so that new requests keeps waitng > > > > > +at the > > > > > > > > s/waitng/waiting/ > > > > > > > Ack. > > > > > > > > + * door of bio_queue_enter(). We cannot fully freeze the queue > > > > because > > > > > + * freezed queue is an empty queue and there are pending > > > > > + requests, > > > > so > > > > > + * only start freezing it. > > > > > + */ > > > > > + blk_freeze_queue_start(q); > > > > > + > > > > > + /* When quiescing completes, all ongoing dispatches have completed > > > > > + * and no new dispatch will happen towards the driver. > > > > > + * This ensures that later when cancel is attempted, then are not > > > > > + * getting processed by the queue_rq() or queue_rqs() handlers. > > > > > + */ > > > > > + blk_mq_quiesce_queue(q); > > > > > + > > > > > + /* > > > > > + * Synchronize with any ongoing VQ callbacks, effectively quiescing > > > > > + * the device and preventing it from completing further requests > > > > > + * to the block layer. Any outstanding, incomplete requests will be > > > > > + * completed by virtblk_request_cancel(). > > > > > + */ > > > > > + virtio_synchronize_cbs(vblk->vdev); > > > > > + > > > > > + /* At this point, no new requests can enter the queue_rq() and > > > > > + * completion routine will not complete any new requests > > > > > + either for > > > > the > > > > > + * broken vq. Hence, it is safe to cancel all requests which are > > > > > + * started. > > > > > + */ > > > > > + blk_mq_tagset_busy_iter(&vblk->tag_set, > > > > > +virtblk_request_cancel, vblk); > > > > > > > > Although virtio_synchronize_cbs() was called, a broken/malicious > > > > device can still raise IRQs. Would that lead to use-after-free or > > > > similar undefined behavior for requests that have been submitted to the > > device? > > > > > > > It shouldn't because vring_interrupt() also checks for the broken VQ before > > invoking the _done(). 
> > > Once the VQ is broken and even if _done() is invoked, it wont progress > > further on get_buf(). > > > And VQs are freed later in del_vq() after the device is reset as you suggested. > > > > See above about a scenario where a race can happen. > > > > > > > > > It seems safer to reset the device before marking the requests as failed. > > > > > > > Such addition should be avoided because when the device is surprise > > removed, even reset will not complete. > > > > The virtblk_remove() function modified by this patch calls > > virtio_reset_device(). Is the expected behavior after this patch that > > virtblk_remove() spins forever? > If the PCI device is truly removed physically, then yes. > This patch is not addressing such problem that existed even before the patch in fixes tag. > > I have experienced this already. Adding that support is relatively bigger change (than this fix). Perhaps a full solution rather than a partial solution would end up being simpler and cleaner. Instead of cutting out a special code path for the virtio-blk PCI surprise unplug case, tackling how the core virtio subsystem should handle PCI surprise unplug may give virtio_blk.c more helpful virtio APIs that make it less complex. It's up to you. Stefan > > > > > > > > > > > + blk_mq_tagset_wait_completed_request(&vblk->tag_set); > > > > > + > > > > > + /* All pending requests are cleaned up. Time to resume so that disk > > > > > + * deletion can be smooth. Start the HW queues so that when > > > > > + queue > > > > is > > > > > + * unquiesced requests can again enter the driver. > > > > > + */ > > > > > + blk_mq_start_stopped_hw_queues(q, true); > > > > > + > > > > > + /* Unquiescing will trigger dispatching any pending requests to the > > > > > + * driver which has crossed bio_queue_enter() to the driver. 
> > > > > + */ > > > > > + blk_mq_unquiesce_queue(q); > > > > > + > > > > > + /* Wait for all pending dispatches to terminate which may have been > > > > > + * initiated after unquiescing. > > > > > + */ > > > > > + blk_mq_freeze_queue_wait(q); > > > > > + > > > > > + /* Mark the disk dead so that once queue unfreeze, the requests > > > > > + * waiting at the door of bio_queue_enter() can be aborted > > > > > + right > > > > away. > > > > > + */ > > > > > + blk_mark_disk_dead(vblk->disk); > > > > > + > > > > > + /* Unfreeze the queue so that any waiting requests will be aborted. > > > > */ > > > > > + blk_mq_unfreeze_queue_nomemrestore(q); > > > > > +} > > > > > + > > > > > static void virtblk_remove(struct virtio_device *vdev) { > > > > > struct virtio_blk *vblk = vdev->priv; @@ -1561,6 +1654,8 @@ > > > > > static void virtblk_remove(struct virtio_device *vdev) > > > > > /* Make sure no work handler is accessing the device. */ > > > > > flush_work(&vblk->config_work); > > > > > > > > > > + virtblk_broken_device_cleanup(vblk); > > > > > + > > > > > del_gendisk(vblk->disk); > > > > > blk_mq_free_tag_set(&vblk->tag_set); > > > > > > > > > > -- > > > > > 2.34.1 > > > > > > > > ^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-22 18:12 ` Stefan Hajnoczi @ 2025-05-22 18:58 ` Parav Pandit 0 siblings, 0 replies; 17+ messages in thread From: Parav Pandit @ 2025-05-22 18:58 UTC (permalink / raw) To: Stefan Hajnoczi Cc: Stefan Hajnoczi, mst@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin > From: Stefan Hajnoczi <stefanha@gmail.com> > Sent: Thursday, May 22, 2025 11:43 PM > > On Thu, May 22, 2025 at 10:56 AM Parav Pandit <parav@nvidia.com> wrote: > > > > > > > From: Stefan Hajnoczi <stefanha@gmail.com> > > > Sent: Thursday, May 22, 2025 8:06 PM > > > > > > On Wed, May 21, 2025 at 10:57 PM Parav Pandit <parav@nvidia.com> > wrote: > > > > > From: Stefan Hajnoczi <stefanha@redhat.com> > > > > > Sent: Wednesday, May 21, 2025 8:27 PM > > > > > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > > > > When the PCI device is surprise removed, requests may not > > > > > > complete the device as the VQ is marked as broken. Due to > > > > > > this, the disk deletion hangs. > > > > > > > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > > > > > > > With this fix now fio completes swiftly. > > > > > > An alternative of IO timeout has been considered, however when > > > > > > the driver knows about unresponsive block device, swiftly > > > > > > clearing them enables users and upper layers to react quickly. > > > > > > > > > > > > Verified with multiple device unplug iterations with pending > > > > > > requests in virtio used ring and some pending with the device. 
> > > > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of > > > > > > virtio pci device") > > > > > > Cc: stable@vger.kernel.org > > > > > > Reported-by: lirongqing@baidu.com > > > > > > Closes: > > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb7 > > > > > > 3ca9 > > > > > > b474 > > > > > > 1@baidu.com/ > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > > > > --- > > > > > > changelog: > > > > > > v0->v1: > > > > > > - Fixed comments from Stefan to rename a cleanup function > > > > > > - Improved logic for handling any outstanding requests > > > > > > in bio layer > > > > > > - improved cancel callback to sync with ongoing done() > > > > > > > > > > > > --- > > > > > > drivers/block/virtio_blk.c | 95 > > > > > > ++++++++++++++++++++++++++++++++++++++ > > > > > > 1 file changed, 95 insertions(+) > > > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c > > > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..5212afdbd3c7 > > > > > > 100644 > > > > > > --- a/drivers/block/virtio_blk.c > > > > > > +++ b/drivers/block/virtio_blk.c > > > > > > @@ -435,6 +435,13 @@ static blk_status_t > > > > > > virtio_queue_rq(struct > > > > > blk_mq_hw_ctx *hctx, > > > > > > blk_status_t status; > > > > > > int err; > > > > > > > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > > > > + * Once the queue is unquiesced, upper block layer flushes > > > > > > + any > > > > > pending > > > > > > + * queued requests; fail them right away. 
> > > > > > + */ > > > > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > > > > + return BLK_STS_IOERR; > > > > > > + > > > > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > > > > if (unlikely(status)) > > > > > > return status; > > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct > > > > > > rq_list > > > *rqlist) > > > > > > while ((req = rq_list_pop(rqlist))) { > > > > > > struct virtio_blk_vq *this_vq = > > > > > >get_virtio_blk_vq(req- mq_hctx); > > > > > > > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > > > > + rq_list_add_tail(&requeue_list, req); > > > > > > + continue; > > > > > > + } > > > > > > + > > > > > > if (vq && vq != this_vq) > > > > > > virtblk_add_req_batch(vq, &submit_list); > > > > > > vq = this_vq; > > > > > > @@ -1554,6 +1566,87 @@ static int virtblk_probe(struct > > > > > > virtio_device > > > > > *vdev) > > > > > > return err; > > > > > > } > > > > > > > > > > > > +static bool virtblk_request_cancel(struct request *rq, void *data) { > > > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq); > > > > > > + struct virtio_blk *vblk = data; > > > > > > + struct virtio_blk_vq *vq; > > > > > > + unsigned long flags; > > > > > > + > > > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num]; > > > > > > + > > > > > > + spin_lock_irqsave(&vq->lock, flags); > > > > > > + > > > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR; > > > > > > + if (blk_mq_request_started(rq) && > !blk_mq_request_completed(rq)) > > > > > > + blk_mq_complete_request(rq); > > > > > > + > > > > > > + spin_unlock_irqrestore(&vq->lock, flags); > > > > > > + return true; > > > > > > +} > > > > > > + > > > > > > +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk) { > > > > > > + struct request_queue *q = vblk->disk->queue; > > > > > > + > > > > > > + if (!virtqueue_is_broken(vblk->vqs[0].vq)) > > > > > > + return; > > > > > > > > > > Can a subset of virtqueues be broken? 
If so, then this code > > > > > doesn't handle > > > it. > > > > On device removal all the VQs are broken. This check only uses a > > > > VQ to decide > > > on. > > > > In future may be more elaborate API to have virtio_dev_broken() > > > > can be > > > added. > > > > Prefer to keep this patch without extending many APIs given it has Fixes > tag. > > > > > > virtblk_remove() is called not just when a PCI device is hot > > > unplugged. For example, removing the virtio_blk kernel module or > > > unbinding a specific virtio device instance also calls it. > > > > > This is ok. > > > > > My concern is that virtblk_broken_device_cleanup() is only intended > > > for the cases where all virtqueues are broken or none are broken. If > > > just the first virtqueue is broken then it completes requests on > > > operational virtqueues and they may still raise an interrupt. > > > > > I see that vq broken is extended for each reset scenario too lately in > vp_modern_enable_vq_after_reset(). > > So yes, this patch which was intended for original surprise removal bug > where vq broken was not done for reset cases. > > > > I believe for fixing the cited patch, device->broken flag should be used. > > Max indicated this in an internal review, but I was inclined to avoid adding > many changes. > > And hence reuse vq broken. > > > > So one option is to extend, > > > > virtio_break_device() to have a flag like below and check during remove(). > > dev->broken = true; > > > > or to revert the patch, 43bb40c5b926, which Michael was not linking. > > > > > The use-after-free I'm thinking about is when > > > virtblk_request_cancel() > > > -> ... -> blk_mq_end_request() has been called on a virtqueue that > > > -> is > > > not broken, followed by virtblk_done() using the struct request > > > obtained from blk_mq_rq_from_pdu(). > > > > > This can happen for case when nonsurprise removal is done possibly. 
> > > > > Maybe just adding a virtqueue_is_broken() check in > > > virtblk_request_cancel() is enough to skip requests that are still > > > in-flight on operational virtqueues. > > Well, the idea of calling request_cancel() iterator only if the VQ is broken. > > So in regular remove() this should not be called. Existing flow is better. > > > > > > > > > > > > > > > > > > > > + > > > > > > + /* Start freezing the queue, so that new requests keeps > > > > > > +waitng at the > > > > > > > > > > s/waitng/waiting/ > > > > > > > > > Ack. > > > > > > > > > > + * door of bio_queue_enter(). We cannot fully freeze the > > > > > > + queue > > > > > because > > > > > > + * freezed queue is an empty queue and there are pending > > > > > > + requests, > > > > > so > > > > > > + * only start freezing it. > > > > > > + */ > > > > > > + blk_freeze_queue_start(q); > > > > > > + > > > > > > + /* When quiescing completes, all ongoing dispatches have > completed > > > > > > + * and no new dispatch will happen towards the driver. > > > > > > + * This ensures that later when cancel is attempted, then are not > > > > > > + * getting processed by the queue_rq() or queue_rqs() handlers. > > > > > > + */ > > > > > > + blk_mq_quiesce_queue(q); > > > > > > + > > > > > > + /* > > > > > > + * Synchronize with any ongoing VQ callbacks, effectively > quiescing > > > > > > + * the device and preventing it from completing further requests > > > > > > + * to the block layer. Any outstanding, incomplete requests will > be > > > > > > + * completed by virtblk_request_cancel(). > > > > > > + */ > > > > > > + virtio_synchronize_cbs(vblk->vdev); > > > > > > + > > > > > > + /* At this point, no new requests can enter the queue_rq() and > > > > > > + * completion routine will not complete any new requests > > > > > > + either for > > > > > the > > > > > > + * broken vq. Hence, it is safe to cancel all requests which are > > > > > > + * started. 
> > > > > > + */ > > > > > > + blk_mq_tagset_busy_iter(&vblk->tag_set, > > > > > > +virtblk_request_cancel, vblk); > > > > > > > > > > Although virtio_synchronize_cbs() was called, a broken/malicious > > > > > device can still raise IRQs. Would that lead to use-after-free > > > > > or similar undefined behavior for requests that have been > > > > > submitted to the > > > device? > > > > > > > > > It shouldn't because vring_interrupt() also checks for the broken > > > > VQ before > > > invoking the _done(). > > > > Once the VQ is broken and even if _done() is invoked, it wont > > > > progress > > > further on get_buf(). > > > > And VQs are freed later in del_vq() after the device is reset as you > suggested. > > > > > > See above about a scenario where a race can happen. > > > > > > > > > > > > It seems safer to reset the device before marking the requests as > failed. > > > > > > > > > Such addition should be avoided because when the device is > > > > surprise > > > removed, even reset will not complete. > > > > > > The virtblk_remove() function modified by this patch calls > > > virtio_reset_device(). Is the expected behavior after this patch > > > that > > > virtblk_remove() spins forever? > > If the PCI device is truly removed physically, then yes. > > This patch is not addressing such problem that existed even before the > patch in fixes tag. > > > > I have experienced this already. Adding that support is relatively bigger > change (than this fix). > > Perhaps a full solution rather than a partial solution would end up being > simpler and cleaner. Instead of cutting out a special code path for the virtio-blk PCI surprise unplug case, tackling how the core virtio subsystem should handle PCI surprise unplug may give virtio_blk.c more helpful virtio APIs that make it less complex. It's up to you. > Each upper layer is different in how to quiesce it, so I am not sure how that code can be generalized in lower layers. 
But I agree that, at a minimum, a common PCI-layer signal indicating device surprise removal would be useful. For the full solution, it appeared that a lot more changes would be needed, likely on top of this fix. So the present objective is to fix the regression caused by the commit in the Fixes tag. ^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-22 14:55 ` Parav Pandit 2025-05-22 18:12 ` Stefan Hajnoczi @ 2025-05-26 9:23 ` Parav Pandit 2025-05-26 13:29 ` Stefan Hajnoczi 1 sibling, 1 reply; 17+ messages in thread From: Parav Pandit @ 2025-05-26 9:23 UTC (permalink / raw) To: Stefan Hajnoczi Cc: Stefan Hajnoczi, mst@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin Hi Stefan, > From: Parav Pandit <parav@nvidia.com> > Sent: Thursday, May 22, 2025 8:26 PM > > > > From: Stefan Hajnoczi <stefanha@gmail.com> > > Sent: Thursday, May 22, 2025 8:06 PM > > > > On Wed, May 21, 2025 at 10:57 PM Parav Pandit <parav@nvidia.com> > wrote: > > > > From: Stefan Hajnoczi <stefanha@redhat.com> > > > > Sent: Wednesday, May 21, 2025 8:27 PM > > > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > > > When the PCI device is surprise removed, requests may not > > > > > complete the device as the VQ is marked as broken. Due to this, > > > > > the disk deletion hangs. > > > > > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > > > > > With this fix now fio completes swiftly. > > > > > An alternative of IO timeout has been considered, however when > > > > > the driver knows about unresponsive block device, swiftly > > > > > clearing them enables users and upper layers to react quickly. > > > > > > > > > > Verified with multiple device unplug iterations with pending > > > > > requests in virtio used ring and some pending with the device. 
> > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of > > > > > virtio pci device") > > > > > Cc: stable@vger.kernel.org > > > > > Reported-by: lirongqing@baidu.com > > > > > Closes: > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73c > > > > > a9 > > > > > b474 > > > > > 1@baidu.com/ > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > > > --- > > > > > changelog: > > > > > v0->v1: > > > > > - Fixed comments from Stefan to rename a cleanup function > > > > > - Improved logic for handling any outstanding requests > > > > > in bio layer > > > > > - improved cancel callback to sync with ongoing done() > > > > > > > > > > --- > > > > > drivers/block/virtio_blk.c | 95 > > > > > ++++++++++++++++++++++++++++++++++++++ > > > > > 1 file changed, 95 insertions(+) > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c > > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..5212afdbd3c7 > > > > > 100644 > > > > > --- a/drivers/block/virtio_blk.c > > > > > +++ b/drivers/block/virtio_blk.c > > > > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct > > > > blk_mq_hw_ctx *hctx, > > > > > blk_status_t status; > > > > > int err; > > > > > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > > > + * Once the queue is unquiesced, upper block layer flushes > > > > > + any > > > > pending > > > > > + * queued requests; fail them right away. 
> > > > > + */ > > > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > > > + return BLK_STS_IOERR; > > > > > + > > > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > > > if (unlikely(status)) > > > > > return status; > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list > > *rqlist) > > > > > while ((req = rq_list_pop(rqlist))) { > > > > > struct virtio_blk_vq *this_vq = > > > > >get_virtio_blk_vq(req- mq_hctx); > > > > > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > > > + rq_list_add_tail(&requeue_list, req); > > > > > + continue; > > > > > + } > > > > > + > > > > > if (vq && vq != this_vq) > > > > > virtblk_add_req_batch(vq, &submit_list); > > > > > vq = this_vq; > > > > > @@ -1554,6 +1566,87 @@ static int virtblk_probe(struct > > > > > virtio_device > > > > *vdev) > > > > > return err; > > > > > } > > > > > > > > > > +static bool virtblk_request_cancel(struct request *rq, void *data) { > > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq); > > > > > + struct virtio_blk *vblk = data; > > > > > + struct virtio_blk_vq *vq; > > > > > + unsigned long flags; > > > > > + > > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num]; > > > > > + > > > > > + spin_lock_irqsave(&vq->lock, flags); > > > > > + > > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR; > > > > > + if (blk_mq_request_started(rq) && > !blk_mq_request_completed(rq)) > > > > > + blk_mq_complete_request(rq); > > > > > + > > > > > + spin_unlock_irqrestore(&vq->lock, flags); > > > > > + return true; > > > > > +} > > > > > + > > > > > +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk) { > > > > > + struct request_queue *q = vblk->disk->queue; > > > > > + > > > > > + if (!virtqueue_is_broken(vblk->vqs[0].vq)) > > > > > + return; > > > > > > > > Can a subset of virtqueues be broken? If so, then this code > > > > doesn't handle > > it. > > > On device removal all the VQs are broken. 
This check only uses a VQ > > > to decide > > on. > > > In future may be more elaborate API to have virtio_dev_broken() can > > > be > > added. > > > Prefer to keep this patch without extending many APIs given it has Fixes > tag. > > > > virtblk_remove() is called not just when a PCI device is hot > > unplugged. For example, removing the virtio_blk kernel module or > > unbinding a specific virtio > > device instance also calls it. > > > This is ok. > > > My concern is that virtblk_broken_device_cleanup() is only intended > > for the cases where all virtqueues are broken or none are broken. If > > just the first virtqueue is broken then it completes requests on > > operational virtqueues and they may still raise an interrupt. > > > I see that vq broken is extended for each reset scenario too lately in > vp_modern_enable_vq_after_reset(). > So yes, this patch which was intended for original surprise removal bug where > vq broken was not done for reset cases. > > I believe for fixing the cited patch, device->broken flag should be used. > Max indicated this in an internal review, but I was inclined to avoid adding > many changes. > And hence reuse vq broken. > > So one option is to extend, > > virtio_break_device() to have a flag like below and check during remove(). > dev->broken = true; > I dug further. VQ resize is the only user that dynamically breaks and unbreaks a VQ, and it is specific to the vnet device. So the vq[0] broken check in this proposed function is sufficient, without adding the dev->broken check above. If there are no further comments, I would like to post v2 addressing your and Michael's inputs. Please let me know. > or to revert the patch, 43bb40c5b926, which Michael was not linking. > ^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal 2025-05-26 9:23 ` Parav Pandit @ 2025-05-26 13:29 ` Stefan Hajnoczi 0 siblings, 0 replies; 17+ messages in thread From: Stefan Hajnoczi @ 2025-05-26 13:29 UTC (permalink / raw) To: Parav Pandit Cc: Stefan Hajnoczi, mst@redhat.com, axboe@kernel.dk, virtualization@lists.linux.dev, linux-block@vger.kernel.org, stable@vger.kernel.org, NBU-Contact-Li Rongqing (EXTERNAL), Chaitanya Kulkarni, xuanzhuo@linux.alibaba.com, pbonzini@redhat.com, jasowang@redhat.com, Max Gurtovoy, Israel Rukshin On Mon, May 26, 2025 at 5:23 AM Parav Pandit <parav@nvidia.com> wrote: > > Hi Stefan, > > > From: Parav Pandit <parav@nvidia.com> > > Sent: Thursday, May 22, 2025 8:26 PM > > > > > > > From: Stefan Hajnoczi <stefanha@gmail.com> > > > Sent: Thursday, May 22, 2025 8:06 PM > > > > > > On Wed, May 21, 2025 at 10:57 PM Parav Pandit <parav@nvidia.com> > > wrote: > > > > > From: Stefan Hajnoczi <stefanha@redhat.com> > > > > > Sent: Wednesday, May 21, 2025 8:27 PM > > > > > > > > > > On Wed, May 21, 2025 at 06:37:41AM +0000, Parav Pandit wrote: > > > > > > When the PCI device is surprise removed, requests may not > > > > > > complete the device as the VQ is marked as broken. Due to this, > > > > > > the disk deletion hangs. > > > > > > > > > > > > Fix it by aborting the requests when the VQ is broken. > > > > > > > > > > > > With this fix now fio completes swiftly. > > > > > > An alternative of IO timeout has been considered, however when > > > > > > the driver knows about unresponsive block device, swiftly > > > > > > clearing them enables users and upper layers to react quickly. > > > > > > > > > > > > Verified with multiple device unplug iterations with pending > > > > > > requests in virtio used ring and some pending with the device. 
> > > > > > > > > > > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of > > > > > > virtio pci device") > > > > > > Cc: stable@vger.kernel.org > > > > > > Reported-by: lirongqing@baidu.com > > > > > > Closes: > > > > > > https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73c > > > > > > a9 > > > > > > b474 > > > > > > 1@baidu.com/ > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com> > > > > > > Reviewed-by: Israel Rukshin <israelr@nvidia.com> > > > > > > Signed-off-by: Parav Pandit <parav@nvidia.com> > > > > > > --- > > > > > > changelog: > > > > > > v0->v1: > > > > > > - Fixed comments from Stefan to rename a cleanup function > > > > > > - Improved logic for handling any outstanding requests > > > > > > in bio layer > > > > > > - improved cancel callback to sync with ongoing done() > > > > > > > > > > > > --- > > > > > > drivers/block/virtio_blk.c | 95 > > > > > > ++++++++++++++++++++++++++++++++++++++ > > > > > > 1 file changed, 95 insertions(+) > > > > > > > > > > > > diff --git a/drivers/block/virtio_blk.c > > > > > > b/drivers/block/virtio_blk.c index 7cffea01d868..5212afdbd3c7 > > > > > > 100644 > > > > > > --- a/drivers/block/virtio_blk.c > > > > > > +++ b/drivers/block/virtio_blk.c > > > > > > @@ -435,6 +435,13 @@ static blk_status_t virtio_queue_rq(struct > > > > > blk_mq_hw_ctx *hctx, > > > > > > blk_status_t status; > > > > > > int err; > > > > > > > > > > > > + /* Immediately fail all incoming requests if the vq is broken. > > > > > > + * Once the queue is unquiesced, upper block layer flushes > > > > > > + any > > > > > pending > > > > > > + * queued requests; fail them right away. 
> > > > > > + */ > > > > > > + if (unlikely(virtqueue_is_broken(vblk->vqs[qid].vq))) > > > > > > + return BLK_STS_IOERR; > > > > > > + > > > > > > status = virtblk_prep_rq(hctx, vblk, req, vbr); > > > > > > if (unlikely(status)) > > > > > > return status; > > > > > > @@ -508,6 +515,11 @@ static void virtio_queue_rqs(struct rq_list > > > *rqlist) > > > > > > while ((req = rq_list_pop(rqlist))) { > > > > > > struct virtio_blk_vq *this_vq = > > > > > > get_virtio_blk_vq(req->mq_hctx); > > > > > > > > > > > > + if (unlikely(virtqueue_is_broken(this_vq->vq))) { > > > > > > + rq_list_add_tail(&requeue_list, req); > > > > > > + continue; > > > > > > + } > > > > > > + > > > > > > if (vq && vq != this_vq) > > > > > > virtblk_add_req_batch(vq, &submit_list); > > > > > > vq = this_vq; > > > > > > @@ -1554,6 +1566,87 @@ static int virtblk_probe(struct > > > > > > virtio_device > > > > > *vdev) > > > > > > return err; > > > > > > } > > > > > > > > > > > > +static bool virtblk_request_cancel(struct request *rq, void *data) { > > > > > > + struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq); > > > > > > + struct virtio_blk *vblk = data; > > > > > > + struct virtio_blk_vq *vq; > > > > > > + unsigned long flags; > > > > > > + > > > > > > + vq = &vblk->vqs[rq->mq_hctx->queue_num]; > > > > > > + > > > > > > + spin_lock_irqsave(&vq->lock, flags); > > > > > > + > > > > > > + vbr->in_hdr.status = VIRTIO_BLK_S_IOERR; > > > > > > + if (blk_mq_request_started(rq) && > > !blk_mq_request_completed(rq)) > > > > > > + blk_mq_complete_request(rq); > > > > > > + > > > > > > + spin_unlock_irqrestore(&vq->lock, flags); > > > > > > + return true; > > > > > > +} > > > > > > + > > > > > > +static void virtblk_broken_device_cleanup(struct virtio_blk *vblk) { > > > > > > + struct request_queue *q = vblk->disk->queue; > > > > > > + > > > > > > + if (!virtqueue_is_broken(vblk->vqs[0].vq)) > > > > > > + return; > > > > > > > > > > Can a subset of virtqueues be broken?
If so, then this code > > > > > doesn't handle > > > it. > > > > On device removal all the VQs are broken. This check only uses a VQ > > > > to decide > > > on. > > > > In future, a more elaborate virtio_dev_broken() API can > > > > be > > > added. > > > > Prefer to keep this patch without extending many APIs given it has a Fixes > > tag. > > > > > > virtblk_remove() is called not just when a PCI device is hot > > > unplugged. For example, removing the virtio_blk kernel module or > > > unbinding a specific virtio device instance also calls it. > > > > > This is ok. > > > > > My concern is that virtblk_broken_device_cleanup() is only intended > > > for the cases where all virtqueues are broken or none are broken. If > > > just the first virtqueue is broken then it completes requests on > > > operational virtqueues and they may still raise an interrupt. > > > > > I see that vq broken is extended for each reset scenario too lately in > > vp_modern_enable_vq_after_reset(). > > So yes, this patch was intended for the original surprise-removal bug, where > > vq broken was not set for the reset cases. > > > > I believe for fixing the cited patch, the device->broken flag should be used. > > Max indicated this in an internal review, but I was inclined to avoid adding > > many changes. > > And hence reuse vq broken. > > > > So one option is to extend > > virtio_break_device() to have a flag like below and check it during remove(). > > dev->broken = true; > > > > I dug further. > VQ resize is the only user of dynamically breaking and unbreaking a VQ, and it is specific only to the vnet device. > So the vq[0].broken check in this patch is sufficient in this proposed function without adding the above dev->broken check. > > If no further comments, I would like to post v2 addressing your and Michael's inputs. > Please let me know. Yes, please go ahead with the next revision. Stefan
Thread overview: 17+ messages
[not found] <20250521062744.1361774-1-parav@nvidia.com>
2025-05-21 8:17 ` [PATCH v1] virtio_blk: Fix disk deletion hang on device surprise removal Michael S. Tsirkin
2025-05-21 9:14 ` Parav Pandit
2025-05-21 9:19 ` Michael S. Tsirkin
2025-05-21 9:32 ` Parav Pandit
2025-05-21 10:16 ` Michael S. Tsirkin
2025-05-21 10:34 ` Parav Pandit
2025-05-21 10:43 ` Michael S. Tsirkin
2025-05-21 12:40 ` Parav Pandit
2025-05-21 16:49 ` Michael S. Tsirkin
2025-05-21 14:56 ` Stefan Hajnoczi
2025-05-22 2:57 ` Parav Pandit
2025-05-22 14:36 ` Stefan Hajnoczi
2025-05-22 14:55 ` Parav Pandit
2025-05-22 18:12 ` Stefan Hajnoczi
2025-05-22 18:58 ` Parav Pandit
2025-05-26 9:23 ` Parav Pandit
2025-05-26 13:29 ` Stefan Hajnoczi