* Re: [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq()
[not found] <20220228065720.100-1-xieyongji@bytedance.com>
@ 2022-03-01 12:59 ` Christoph Hellwig
2022-03-01 15:43 ` Michael S. Tsirkin
1 sibling, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2022-03-01 12:59 UTC (permalink / raw)
To: Xie Yongji; +Cc: axboe, linux-block, mst, virtualization, hch
Looks good,
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq()
[not found] <20220228065720.100-1-xieyongji@bytedance.com>
2022-03-01 12:59 ` [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq() Christoph Hellwig
@ 2022-03-01 15:43 ` Michael S. Tsirkin
[not found] ` <CACycT3uGFUjmuESUi9=Kkeg4FboVifAHD0D0gPTkEprcTP=x+g@mail.gmail.com>
[not found] ` <85e61a65-4f76-afc0-272f-3b13333349f1@nvidia.com>
1 sibling, 2 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2022-03-01 15:43 UTC (permalink / raw)
To: Xie Yongji; +Cc: axboe, hch, linux-block, virtualization
On Mon, Feb 28, 2022 at 02:57:20PM +0800, Xie Yongji wrote:
> Currently we have a BUG_ON() to make sure the number of sg
> list does not exceed queue_max_segments() in virtio_queue_rq().
> However, the block layer uses queue_max_discard_segments()
> instead of queue_max_segments() to limit the sg list for
> discard requests. So the BUG_ON() might be triggered if
> virtio-blk device reports a larger value for max discard
> segment than queue_max_segments().
Hmm the spec does not say what should happen if max_discard_seg
exceeds seg_max. Is this the config you have in mind? how do you
create it?
> To fix it, let's simply
> remove the BUG_ON() which has become unnecessary after commit
> 02746e26c39e("virtio-blk: avoid preallocating big SGL for data").
> And the unused vblk->sg_elems can also be removed together.
>
> Fixes: 1f23816b8eb8 ("virtio_blk: add discard and write zeroes support")
> Suggested-by: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
> ---
> drivers/block/virtio_blk.c | 10 +---------
> 1 file changed, 1 insertion(+), 9 deletions(-)
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index c443cd64fc9b..a43eb1813cec 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -76,9 +76,6 @@ struct virtio_blk {
>   	 */
>   	refcount_t refs;
>  
> -	/* What host tells us, plus 2 for header & tailer. */
> -	unsigned int sg_elems;
> -
>   	/* Ida index - used to track minor number allocations. */
>   	int index;
>  
> @@ -322,8 +319,6 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
>   	blk_status_t status;
>   	int err;
>  
> -	BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
> -
>   	status = virtblk_setup_cmd(vblk->vdev, req, vbr);
>   	if (unlikely(status))
>   		return status;
> @@ -783,8 +778,6 @@ static int virtblk_probe(struct virtio_device *vdev)
>   	/* Prevent integer overflows and honor max vq size */
>   	sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2);
>  
> -	/* We need extra sg elements at head and tail. */
> -	sg_elems += 2;
>   	vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);
>   	if (!vblk) {
>   		err = -ENOMEM;
> @@ -796,7 +789,6 @@ static int virtblk_probe(struct virtio_device *vdev)
>   	mutex_init(&vblk->vdev_mutex);
>  
>   	vblk->vdev = vdev;
> -	vblk->sg_elems = sg_elems;
>  
>   	INIT_WORK(&vblk->config_work, virtblk_config_changed_work);
>  
> @@ -853,7 +845,7 @@ static int virtblk_probe(struct virtio_device *vdev)
>   		set_disk_ro(vblk->disk, 1);
>  
>   	/* We can handle whatever the host told us to handle. */
> -	blk_queue_max_segments(q, vblk->sg_elems-2);
> +	blk_queue_max_segments(q, sg_elems);
>  
>   	/* No real sector limit. */
>   	blk_queue_max_hw_sectors(q, -1U);
> --
> 2.20.1
* Re: [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq()
[not found] ` <CACycT3uGFUjmuESUi9=Kkeg4FboVifAHD0D0gPTkEprcTP=x+g@mail.gmail.com>
@ 2022-03-02 13:15 ` Michael S. Tsirkin
[not found] ` <8fa47a28-a974-4478-23b6-aea14355a315@nvidia.com>
0 siblings, 1 reply; 9+ messages in thread
From: Michael S. Tsirkin @ 2022-03-02 13:15 UTC (permalink / raw)
To: Yongji Xie; +Cc: Jens Axboe, Christoph Hellwig, linux-block, virtualization
On Wed, Mar 02, 2022 at 06:46:03PM +0800, Yongji Xie wrote:
> On Tue, Mar 1, 2022 at 11:43 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Mon, Feb 28, 2022 at 02:57:20PM +0800, Xie Yongji wrote:
> > > Currently we have a BUG_ON() to make sure the number of sg
> > > list does not exceed queue_max_segments() in virtio_queue_rq().
> > > However, the block layer uses queue_max_discard_segments()
> > > instead of queue_max_segments() to limit the sg list for
> > > discard requests. So the BUG_ON() might be triggered if
> > > virtio-blk device reports a larger value for max discard
> > > segment than queue_max_segments().
> >
> > Hmm the spec does not say what should happen if max_discard_seg
> > exceeds seg_max. Is this the config you have in mind? how do you
> > create it?
> >
>
> One example: the device doesn't specify the value of max_discard_seg
> in the config space, then the virtio-blk driver will use
> MAX_DISCARD_SEGMENTS (256) by default. Then we're able to trigger the
> BUG_ON() if the seg_max is less than 256.
>
> While the spec didn't say what should happen if max_discard_seg
> exceeds seg_max, it also doesn't explicitly prohibit this
> configuration. So I think we should at least not panic the kernel in
> this case.
>
> Thanks,
> Yongji
Oh that last one sounds like a bug, I think it should be
min(MAX_DISCARD_SEGMENTS, seg_max)
When max_discard_seg and seg_max both exist, that's a different question. We can
- do min(max_discard_seg, seg_max)
- fail probe
- clear the relevant feature flag
I feel we need a better plan than submitting an invalid request to the device.
--
MST
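
(For illustration only; the sketch below is not code from the thread. It shows what the min()-style clamping described above could look like in virtblk_probe(): max_discard_segs is a made-up local name, and sg_elems is assumed to still hold the seg_max value read earlier in probe.)

	if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
		u32 max_discard_segs;

		virtio_cread(vdev, struct virtio_blk_config, max_discard_seg,
			     &max_discard_segs);
		/* 0 means the device did not fill in the field; fall back
		 * to the driver-side default. */
		if (!max_discard_segs)
			max_discard_segs = MAX_DISCARD_SEGMENTS;
		/* Option 1: never advertise more discard segments than the
		 * device's seg_max. */
		blk_queue_max_discard_segments(q, min_t(u32, max_discard_segs,
							sg_elems));
	}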
* Re: [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq()
[not found] ` <85e61a65-4f76-afc0-272f-3b13333349f1@nvidia.com>
@ 2022-03-02 13:17 ` Michael S. Tsirkin
[not found] ` <bd53b0dc-bef6-cd1a-ac5c-68766089a619@nvidia.com>
0 siblings, 1 reply; 9+ messages in thread
From: Michael S. Tsirkin @ 2022-03-02 13:17 UTC (permalink / raw)
To: Max Gurtovoy; +Cc: axboe, hch, virtualization, linux-block, Xie Yongji
On Wed, Mar 02, 2022 at 11:51:27AM +0200, Max Gurtovoy wrote:
>
> On 3/1/2022 5:43 PM, Michael S. Tsirkin wrote:
> > On Mon, Feb 28, 2022 at 02:57:20PM +0800, Xie Yongji wrote:
> > > Currently we have a BUG_ON() to make sure the number of sg
> > > list does not exceed queue_max_segments() in virtio_queue_rq().
> > > However, the block layer uses queue_max_discard_segments()
> > > instead of queue_max_segments() to limit the sg list for
> > > discard requests. So the BUG_ON() might be triggered if
> > > virtio-blk device reports a larger value for max discard
> > > segment than queue_max_segments().
> > Hmm the spec does not say what should happen if max_discard_seg
> > exceeds seg_max. Is this the config you have in mind? how do you
> > create it?
>
> I don't think it's hard to create it. Just change some registers in the
> device.
>
> But with the dynamic sgl allocation that I added recently, there is no
> problem with this scenario.
Well the problem is device says it can't handle such large descriptors,
I guess it works anyway, but it seems scary.
> This commit looks good to me, thanks Xie Yongji.
>
> Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
>
> > > To fix it, let's simply
> > > remove the BUG_ON() which has become unnecessary after commit
> > > 02746e26c39e("virtio-blk: avoid preallocating big SGL for data").
> > > And the unused vblk->sg_elems can also be removed together.
> > >
> > > Fixes: 1f23816b8eb8 ("virtio_blk: add discard and write zeroes support")
> > > Suggested-by: Christoph Hellwig <hch@infradead.org>
> > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
> > > ---
> > > drivers/block/virtio_blk.c | 10 +---------
> > > 1 file changed, 1 insertion(+), 9 deletions(-)
> > >
> > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > index c443cd64fc9b..a43eb1813cec 100644
> > > --- a/drivers/block/virtio_blk.c
> > > +++ b/drivers/block/virtio_blk.c
> > > @@ -76,9 +76,6 @@ struct virtio_blk {
> > > */
> > > refcount_t refs;
> > > - /* What host tells us, plus 2 for header & tailer. */
> > > - unsigned int sg_elems;
> > > -
> > > /* Ida index - used to track minor number allocations. */
> > > int index;
> > > @@ -322,8 +319,6 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
> > > blk_status_t status;
> > > int err;
> > > - BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
> > > -
> > > status = virtblk_setup_cmd(vblk->vdev, req, vbr);
> > > if (unlikely(status))
> > > return status;
> > > @@ -783,8 +778,6 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > /* Prevent integer overflows and honor max vq size */
> > > sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2);
> > > - /* We need extra sg elements at head and tail. */
> > > - sg_elems += 2;
> > > vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);
> > > if (!vblk) {
> > > err = -ENOMEM;
> > > @@ -796,7 +789,6 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > mutex_init(&vblk->vdev_mutex);
> > > vblk->vdev = vdev;
> > > - vblk->sg_elems = sg_elems;
> > > INIT_WORK(&vblk->config_work, virtblk_config_changed_work);
> > > @@ -853,7 +845,7 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > set_disk_ro(vblk->disk, 1);
> > > /* We can handle whatever the host told us to handle. */
> > > - blk_queue_max_segments(q, vblk->sg_elems-2);
> > > + blk_queue_max_segments(q, sg_elems);
> > > /* No real sector limit. */
> > > blk_queue_max_hw_sectors(q, -1U);
> > > --
> > > 2.20.1
* Re: [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq()
[not found] ` <bd53b0dc-bef6-cd1a-ac5c-68766089a619@nvidia.com>
@ 2022-03-02 13:33 ` Michael S. Tsirkin
[not found] ` <808fbd57-588d-03e3-2904-513f4bdcceaf@nvidia.com>
[not found] ` <CACycT3uJFNof7UNTdrEK2dVB-W9q4VVkVWnjos6TJawSRF+EDA@mail.gmail.com>
0 siblings, 2 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2022-03-02 13:33 UTC (permalink / raw)
To: Max Gurtovoy; +Cc: axboe, hch, virtualization, linux-block, Xie Yongji
On Wed, Mar 02, 2022 at 03:24:51PM +0200, Max Gurtovoy wrote:
>
> On 3/2/2022 3:17 PM, Michael S. Tsirkin wrote:
> > On Wed, Mar 02, 2022 at 11:51:27AM +0200, Max Gurtovoy wrote:
> > > On 3/1/2022 5:43 PM, Michael S. Tsirkin wrote:
> > > > On Mon, Feb 28, 2022 at 02:57:20PM +0800, Xie Yongji wrote:
> > > > > Currently we have a BUG_ON() to make sure the number of sg
> > > > > list does not exceed queue_max_segments() in virtio_queue_rq().
> > > > > However, the block layer uses queue_max_discard_segments()
> > > > > instead of queue_max_segments() to limit the sg list for
> > > > > discard requests. So the BUG_ON() might be triggered if
> > > > > virtio-blk device reports a larger value for max discard
> > > > > segment than queue_max_segments().
> > > > Hmm the spec does not say what should happen if max_discard_seg
> > > > exceeds seg_max. Is this the config you have in mind? how do you
> > > > create it?
> > > I don't think it's hard to create it. Just change some registers in the
> > > device.
> > >
> > > But with the dynamic sgl allocation that I added recently, there is no
> > > problem with this scenario.
> > Well the problem is device says it can't handle such large descriptors,
> > I guess it works anyway, but it seems scary.
>
> I don't follow.
>
> The only problem this patch solves is when a virtio blk device reports
> larger value for max_discard_segments than max_segments.
>
No, the problem reported is when a virtio-blk device reports
max_segments < 256 but not max_discard_segments.
I would expect discard to follow max_segments restrictions then.
> Probably no such devices, but we need to be prepared.
Right, question is how to handle this.
> >
> > > This commit looks good to me, thanks Xie Yongji.
> > >
> > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > >
> > > > > To fix it, let's simply
> > > > > remove the BUG_ON() which has become unnecessary after commit
> > > > > 02746e26c39e("virtio-blk: avoid preallocating big SGL for data").
> > > > > And the unused vblk->sg_elems can also be removed together.
> > > > >
> > > > > Fixes: 1f23816b8eb8 ("virtio_blk: add discard and write zeroes support")
> > > > > Suggested-by: Christoph Hellwig <hch@infradead.org>
> > > > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
> > > > > ---
> > > > > drivers/block/virtio_blk.c | 10 +---------
> > > > > 1 file changed, 1 insertion(+), 9 deletions(-)
> > > > >
> > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > > > index c443cd64fc9b..a43eb1813cec 100644
> > > > > --- a/drivers/block/virtio_blk.c
> > > > > +++ b/drivers/block/virtio_blk.c
> > > > > @@ -76,9 +76,6 @@ struct virtio_blk {
> > > > > */
> > > > > refcount_t refs;
> > > > > - /* What host tells us, plus 2 for header & tailer. */
> > > > > - unsigned int sg_elems;
> > > > > -
> > > > > /* Ida index - used to track minor number allocations. */
> > > > > int index;
> > > > > @@ -322,8 +319,6 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
> > > > > blk_status_t status;
> > > > > int err;
> > > > > - BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
> > > > > -
> > > > > status = virtblk_setup_cmd(vblk->vdev, req, vbr);
> > > > > if (unlikely(status))
> > > > > return status;
> > > > > @@ -783,8 +778,6 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > /* Prevent integer overflows and honor max vq size */
> > > > > sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2);
> > > > > - /* We need extra sg elements at head and tail. */
> > > > > - sg_elems += 2;
> > > > > vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);
> > > > > if (!vblk) {
> > > > > err = -ENOMEM;
> > > > > @@ -796,7 +789,6 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > mutex_init(&vblk->vdev_mutex);
> > > > > vblk->vdev = vdev;
> > > > > - vblk->sg_elems = sg_elems;
> > > > > INIT_WORK(&vblk->config_work, virtblk_config_changed_work);
> > > > > @@ -853,7 +845,7 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > set_disk_ro(vblk->disk, 1);
> > > > > /* We can handle whatever the host told us to handle. */
> > > > > - blk_queue_max_segments(q, vblk->sg_elems-2);
> > > > > + blk_queue_max_segments(q, sg_elems);
> > > > > /* No real sector limit. */
> > > > > blk_queue_max_hw_sectors(q, -1U);
> > > > > --
> > > > > 2.20.1
* Re: [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq()
[not found] ` <808fbd57-588d-03e3-2904-513f4bdcceaf@nvidia.com>
@ 2022-03-02 14:15 ` Michael S. Tsirkin
[not found] ` <fe42c787-700c-d136-75b9-5a3e1b6d1b4f@nvidia.com>
0 siblings, 1 reply; 9+ messages in thread
From: Michael S. Tsirkin @ 2022-03-02 14:15 UTC (permalink / raw)
To: Max Gurtovoy; +Cc: axboe, hch, virtualization, linux-block, Xie Yongji
On Wed, Mar 02, 2022 at 03:45:10PM +0200, Max Gurtovoy wrote:
>
> On 3/2/2022 3:33 PM, Michael S. Tsirkin wrote:
> > On Wed, Mar 02, 2022 at 03:24:51PM +0200, Max Gurtovoy wrote:
> > > On 3/2/2022 3:17 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 02, 2022 at 11:51:27AM +0200, Max Gurtovoy wrote:
> > > > > On 3/1/2022 5:43 PM, Michael S. Tsirkin wrote:
> > > > > > On Mon, Feb 28, 2022 at 02:57:20PM +0800, Xie Yongji wrote:
> > > > > > > Currently we have a BUG_ON() to make sure the number of sg
> > > > > > > list does not exceed queue_max_segments() in virtio_queue_rq().
> > > > > > > However, the block layer uses queue_max_discard_segments()
> > > > > > > instead of queue_max_segments() to limit the sg list for
> > > > > > > discard requests. So the BUG_ON() might be triggered if
> > > > > > > virtio-blk device reports a larger value for max discard
> > > > > > > segment than queue_max_segments().
> > > > > > Hmm the spec does not say what should happen if max_discard_seg
> > > > > > exceeds seg_max. Is this the config you have in mind? how do you
> > > > > > create it?
> > > > > I don't think it's hard to create it. Just change some registers in the
> > > > > device.
> > > > >
> > > > > But with the dynamic sgl allocation that I added recently, there is no
> > > > > problem with this scenario.
> > > > Well the problem is device says it can't handle such large descriptors,
> > > > I guess it works anyway, but it seems scary.
> > > I don't follow.
> > >
> > > The only problem this patch solves is when a virtio blk device reports
> > > larger value for max_discard_segments than max_segments.
> > >
> > No, the problem reported is when a virtio-blk device reports
> > max_segments < 256 but not max_discard_segments.
>
> You mean the code will work in case the device reports max_discard_segments >
> max_segments?
>
> I don't think so.
I think it's like this:

	if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {

		....

		virtio_cread(vdev, struct virtio_blk_config, max_discard_seg,
			     &v);
		blk_queue_max_discard_segments(q,
					       min_not_zero(v,
							    MAX_DISCARD_SEGMENTS));

	}
so, IIUC the case is of a device that sets max_discard_seg to 0.
Which is kind of broken, but we handled this since 2018 so I guess
we'll need to keep doing that.
> This is exactly what Xie Yongji mentioned in the commit message and what I was
> seeing.
>
> But the code will work if VIRTIO_BLK_F_DISCARD is not supported by the
> device (even if max_segments < 256), since the block layer sets
> queue_max_discard_segments = 1 in the initialization.
>
> And the virtio-blk driver won't change it unless VIRTIO_BLK_F_DISCARD is
> supported.
>
> > I would expect discard to follow max_segments restrictions then.
> >
> > > Probably no such devices, but we need to be prepared.
> > Right, question is how to handle this.
> >
> > > > > This commit looks good to me, thanks Xie Yongji.
> > > > >
> > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > > > >
> > > > > > > To fix it, let's simply
> > > > > > > remove the BUG_ON() which has become unnecessary after commit
> > > > > > > 02746e26c39e("virtio-blk: avoid preallocating big SGL for data").
> > > > > > > And the unused vblk->sg_elems can also be removed together.
> > > > > > >
> > > > > > > Fixes: 1f23816b8eb8 ("virtio_blk: add discard and write zeroes support")
> > > > > > > Suggested-by: Christoph Hellwig <hch@infradead.org>
> > > > > > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
> > > > > > > ---
> > > > > > > drivers/block/virtio_blk.c | 10 +---------
> > > > > > > 1 file changed, 1 insertion(+), 9 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > > > > > index c443cd64fc9b..a43eb1813cec 100644
> > > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > > @@ -76,9 +76,6 @@ struct virtio_blk {
> > > > > > > */
> > > > > > > refcount_t refs;
> > > > > > > - /* What host tells us, plus 2 for header & tailer. */
> > > > > > > - unsigned int sg_elems;
> > > > > > > -
> > > > > > > /* Ida index - used to track minor number allocations. */
> > > > > > > int index;
> > > > > > > @@ -322,8 +319,6 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
> > > > > > > blk_status_t status;
> > > > > > > int err;
> > > > > > > - BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
> > > > > > > -
> > > > > > > status = virtblk_setup_cmd(vblk->vdev, req, vbr);
> > > > > > > if (unlikely(status))
> > > > > > > return status;
> > > > > > > @@ -783,8 +778,6 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > > > /* Prevent integer overflows and honor max vq size */
> > > > > > > sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2);
> > > > > > > - /* We need extra sg elements at head and tail. */
> > > > > > > - sg_elems += 2;
> > > > > > > vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);
> > > > > > > if (!vblk) {
> > > > > > > err = -ENOMEM;
> > > > > > > @@ -796,7 +789,6 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > > > mutex_init(&vblk->vdev_mutex);
> > > > > > > vblk->vdev = vdev;
> > > > > > > - vblk->sg_elems = sg_elems;
> > > > > > > INIT_WORK(&vblk->config_work, virtblk_config_changed_work);
> > > > > > > @@ -853,7 +845,7 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > > > set_disk_ro(vblk->disk, 1);
> > > > > > > /* We can handle whatever the host told us to handle. */
> > > > > > > - blk_queue_max_segments(q, vblk->sg_elems-2);
> > > > > > > + blk_queue_max_segments(q, sg_elems);
> > > > > > > /* No real sector limit. */
> > > > > > > blk_queue_max_hw_sectors(q, -1U);
> > > > > > > --
> > > > > > > 2.20.1
* Re: [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq()
[not found] ` <CACycT3uJFNof7UNTdrEK2dVB-W9q4VVkVWnjos6TJawSRF+EDA@mail.gmail.com>
@ 2022-03-02 14:20 ` Michael S. Tsirkin
0 siblings, 0 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2022-03-02 14:20 UTC (permalink / raw)
To: Yongji Xie
Cc: Max Gurtovoy, Jens Axboe, Christoph Hellwig, virtualization,
linux-block
On Wed, Mar 02, 2022 at 09:53:17PM +0800, Yongji Xie wrote:
> On Wed, Mar 2, 2022 at 9:33 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Wed, Mar 02, 2022 at 03:24:51PM +0200, Max Gurtovoy wrote:
> > >
> > > On 3/2/2022 3:17 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 02, 2022 at 11:51:27AM +0200, Max Gurtovoy wrote:
> > > > > On 3/1/2022 5:43 PM, Michael S. Tsirkin wrote:
> > > > > > On Mon, Feb 28, 2022 at 02:57:20PM +0800, Xie Yongji wrote:
> > > > > > > Currently we have a BUG_ON() to make sure the number of sg
> > > > > > > list does not exceed queue_max_segments() in virtio_queue_rq().
> > > > > > > However, the block layer uses queue_max_discard_segments()
> > > > > > > instead of queue_max_segments() to limit the sg list for
> > > > > > > discard requests. So the BUG_ON() might be triggered if
> > > > > > > virtio-blk device reports a larger value for max discard
> > > > > > > segment than queue_max_segments().
> > > > > > Hmm the spec does not say what should happen if max_discard_seg
> > > > > > exceeds seg_max. Is this the config you have in mind? how do you
> > > > > > create it?
> > > > > I don't think it's hard to create it. Just change some registers in the
> > > > > device.
> > > > >
> > > > > But with the dynamic sgl allocation that I added recently, there is no
> > > > > problem with this scenario.
> > > > Well the problem is device says it can't handle such large descriptors,
> > > > I guess it works anyway, but it seems scary.
> > >
> > > I don't follow.
> > >
> > > The only problem this patch solves is when a virtio blk device reports
> > > larger value for max_discard_segments than max_segments.
> > >
> >
> > No, the problem reported is when a virtio-blk device reports
> > max_segments < 256 but not max_discard_segments.
> > I would expect discard to follow max_segments restrictions then.
> >
>
> I think one point is whether we want to allow the corner case where the
> device reports a larger value for max_discard_segments than
> max_segments. For example, queue size is 256, max_segments is 128 - 2,
> max_discard_segments is 256 - 2.
>
> Thanks,
> Yongji
So if the device specifies that, then I guess it's fine and from that POV
the patch is correct. But I think the issue is when the device specifies 0,
which we interpret as 256 with no basis in hardware.
--
MST
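
(As a concrete instance of that: a device advertising seg_max = 64 but leaving max_discard_seg at 0 ends up with queue_max_segments(q) = 64 but queue_max_discard_segments(q) = 256, so a discard request merged from, say, 100 ranges is legal for the block layer yet trips the old BUG_ON(100 + 2 > 66). The numbers here are only illustrative.)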
* Re: [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq()
[not found] ` <fe42c787-700c-d136-75b9-5a3e1b6d1b4f@nvidia.com>
@ 2022-03-02 14:48 ` Michael S. Tsirkin
0 siblings, 0 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2022-03-02 14:48 UTC (permalink / raw)
To: Max Gurtovoy; +Cc: axboe, hch, virtualization, linux-block, Xie Yongji
On Wed, Mar 02, 2022 at 04:27:15PM +0200, Max Gurtovoy wrote:
>
> On 3/2/2022 4:15 PM, Michael S. Tsirkin wrote:
> > On Wed, Mar 02, 2022 at 03:45:10PM +0200, Max Gurtovoy wrote:
> > > On 3/2/2022 3:33 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Mar 02, 2022 at 03:24:51PM +0200, Max Gurtovoy wrote:
> > > > > On 3/2/2022 3:17 PM, Michael S. Tsirkin wrote:
> > > > > > On Wed, Mar 02, 2022 at 11:51:27AM +0200, Max Gurtovoy wrote:
> > > > > > > On 3/1/2022 5:43 PM, Michael S. Tsirkin wrote:
> > > > > > > > On Mon, Feb 28, 2022 at 02:57:20PM +0800, Xie Yongji wrote:
> > > > > > > > > Currently we have a BUG_ON() to make sure the number of sg
> > > > > > > > > list does not exceed queue_max_segments() in virtio_queue_rq().
> > > > > > > > > However, the block layer uses queue_max_discard_segments()
> > > > > > > > > instead of queue_max_segments() to limit the sg list for
> > > > > > > > > discard requests. So the BUG_ON() might be triggered if
> > > > > > > > > virtio-blk device reports a larger value for max discard
> > > > > > > > > segment than queue_max_segments().
> > > > > > > > Hmm the spec does not say what should happen if max_discard_seg
> > > > > > > > exceeds seg_max. Is this the config you have in mind? how do you
> > > > > > > > create it?
> > > > > > > I don't think it's hard to create it. Just change some registers in the
> > > > > > > device.
> > > > > > >
> > > > > > > But with the dynamic sgl allocation that I added recently, there is no
> > > > > > > problem with this scenario.
> > > > > > Well the problem is device says it can't handle such large descriptors,
> > > > > > I guess it works anyway, but it seems scary.
> > > > > I don't follow.
> > > > >
> > > > > The only problem this patch solves is when a virtio blk device reports
> > > > > larger value for max_discard_segments than max_segments.
> > > > >
> > > > No, the problem reported is when a virtio-blk device reports
> > > > max_segments < 256 but not max_discard_segments.
> > > You mean the code will work in case the device reports max_discard_segments >
> > > max_segments?
> > >
> > > I don't think so.
> > I think it's like this:
> >
> > 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
> >
> > 		....
> >
> > 		virtio_cread(vdev, struct virtio_blk_config, max_discard_seg,
> > 			     &v);
> > 		blk_queue_max_discard_segments(q,
> > 					       min_not_zero(v,
> > 							    MAX_DISCARD_SEGMENTS));
> >
> > 	}
> >
> > so, IIUC the case is of a device that sets max_discard_seg to 0.
> >
> > Which is kind of broken, but we handled this since 2018 so I guess
> > we'll need to keep doing that.
>
> A device can't state VIRTIO_BLK_F_DISCARD and set max_discard_seg to 0.
>
> If so, it's a broken device and we can add a quirk for it.
Well we already have min_not_zero there, presumably for some reason.
> Do you have such a device to test?
Xie Yongji mentioned he does.
> >
> > > This is exactly what Xie Yongji mentioned in the commit message and what I was
> > > seeing.
> > >
> > > But the code will work if VIRTIO_BLK_F_DISCARD is not supported by the
> > > device (even if max_segments < 256), since the block layer sets
> > > queue_max_discard_segments = 1 in the initialization.
> > >
> > > And the virtio-blk driver won't change it unless VIRTIO_BLK_F_DISCARD is
> > > supported.
> > >
> > > > I would expect discard to follow max_segments restrictions then.
> > > >
> > > > > Probably no such devices, but we need to be prepared.
> > > > Right, question is how to handle this.
> > > >
> > > > > > > This commit looks good to me, thanks Xie Yongji.
> > > > > > >
> > > > > > > Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
> > > > > > >
> > > > > > > > > To fix it, let's simply
> > > > > > > > > remove the BUG_ON() which has become unnecessary after commit
> > > > > > > > > 02746e26c39e("virtio-blk: avoid preallocating big SGL for data").
> > > > > > > > > And the unused vblk->sg_elems can also be removed together.
> > > > > > > > >
> > > > > > > > > Fixes: 1f23816b8eb8 ("virtio_blk: add discard and write zeroes support")
> > > > > > > > > Suggested-by: Christoph Hellwig <hch@infradead.org>
> > > > > > > > > Signed-off-by: Xie Yongji <xieyongji@bytedance.com>
> > > > > > > > > ---
> > > > > > > > > drivers/block/virtio_blk.c | 10 +---------
> > > > > > > > > 1 file changed, 1 insertion(+), 9 deletions(-)
> > > > > > > > >
> > > > > > > > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > > > > > > > index c443cd64fc9b..a43eb1813cec 100644
> > > > > > > > > --- a/drivers/block/virtio_blk.c
> > > > > > > > > +++ b/drivers/block/virtio_blk.c
> > > > > > > > > @@ -76,9 +76,6 @@ struct virtio_blk {
> > > > > > > > > */
> > > > > > > > > refcount_t refs;
> > > > > > > > > - /* What host tells us, plus 2 for header & tailer. */
> > > > > > > > > - unsigned int sg_elems;
> > > > > > > > > -
> > > > > > > > > /* Ida index - used to track minor number allocations. */
> > > > > > > > > int index;
> > > > > > > > > @@ -322,8 +319,6 @@ static blk_status_t virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
> > > > > > > > > blk_status_t status;
> > > > > > > > > int err;
> > > > > > > > > - BUG_ON(req->nr_phys_segments + 2 > vblk->sg_elems);
> > > > > > > > > -
> > > > > > > > > status = virtblk_setup_cmd(vblk->vdev, req, vbr);
> > > > > > > > > if (unlikely(status))
> > > > > > > > > return status;
> > > > > > > > > @@ -783,8 +778,6 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > > > > > /* Prevent integer overflows and honor max vq size */
> > > > > > > > > sg_elems = min_t(u32, sg_elems, VIRTIO_BLK_MAX_SG_ELEMS - 2);
> > > > > > > > > - /* We need extra sg elements at head and tail. */
> > > > > > > > > - sg_elems += 2;
> > > > > > > > > vdev->priv = vblk = kmalloc(sizeof(*vblk), GFP_KERNEL);
> > > > > > > > > if (!vblk) {
> > > > > > > > > err = -ENOMEM;
> > > > > > > > > @@ -796,7 +789,6 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > > > > > mutex_init(&vblk->vdev_mutex);
> > > > > > > > > vblk->vdev = vdev;
> > > > > > > > > - vblk->sg_elems = sg_elems;
> > > > > > > > > INIT_WORK(&vblk->config_work, virtblk_config_changed_work);
> > > > > > > > > @@ -853,7 +845,7 @@ static int virtblk_probe(struct virtio_device *vdev)
> > > > > > > > > set_disk_ro(vblk->disk, 1);
> > > > > > > > > /* We can handle whatever the host told us to handle. */
> > > > > > > > > - blk_queue_max_segments(q, vblk->sg_elems-2);
> > > > > > > > > + blk_queue_max_segments(q, sg_elems);
> > > > > > > > > /* No real sector limit. */
> > > > > > > > > blk_queue_max_hw_sectors(q, -1U);
> > > > > > > > > --
> > > > > > > > > 2.20.1
* Re: [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq()
[not found] ` <CACycT3ubdASWTW3UN4Wxg2iYnXRaMkrfHty0p6h1E0EYPF82Yw@mail.gmail.com>
@ 2022-03-03 7:22 ` Michael S. Tsirkin
0 siblings, 0 replies; 9+ messages in thread
From: Michael S. Tsirkin @ 2022-03-03 7:22 UTC (permalink / raw)
To: Yongji Xie
Cc: Max Gurtovoy, Jens Axboe, Christoph Hellwig, virtualization,
linux-block
On Thu, Mar 03, 2022 at 11:31:35AM +0800, Yongji Xie wrote:
> On Wed, Mar 2, 2022 at 11:05 PM Max Gurtovoy <mgurtovoy@nvidia.com> wrote:
> >
> >
> > On 3/2/2022 3:15 PM, Michael S. Tsirkin wrote:
> > > On Wed, Mar 02, 2022 at 06:46:03PM +0800, Yongji Xie wrote:
> > >> On Tue, Mar 1, 2022 at 11:43 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> > >>> On Mon, Feb 28, 2022 at 02:57:20PM +0800, Xie Yongji wrote:
> > >>>> Currently we have a BUG_ON() to make sure the number of sg
> > >>>> list does not exceed queue_max_segments() in virtio_queue_rq().
> > >>>> However, the block layer uses queue_max_discard_segments()
> > >>>> instead of queue_max_segments() to limit the sg list for
> > >>>> discard requests. So the BUG_ON() might be triggered if
> > >>>> virtio-blk device reports a larger value for max discard
> > >>>> segment than queue_max_segments().
> > >>> Hmm the spec does not say what should happen if max_discard_seg
> > >>> exceeds seg_max. Is this the config you have in mind? how do you
> > >>> create it?
> > >>>
> > >> One example: the device doesn't specify the value of max_discard_seg
> > >> in the config space, then the virtio-blk driver will use
> > >> MAX_DISCARD_SEGMENTS (256) by default. Then we're able to trigger the
> > >> BUG_ON() if the seg_max is less than 256.
> > >>
> > >> While the spec didn't say what should happen if max_discard_seg
> > >> exceeds seg_max, it also doesn't explicitly prohibit this
> > >> configuration. So I think we should at least not panic the kernel in
> > >> this case.
> > >>
> > >> Thanks,
> > >> Yongji
> > > Oh that last one sounds like a bug, I think it should be
> > > min(MAX_DISCARD_SEGMENTS, seg_max)
> > >
> > > When max_discard_seg and seg_max both exist, that's a different question. We can
> > > - do min(max_discard_seg, seg_max)
> > > - fail probe
> > > - clear the relevant feature flag
> > >
> > > I feel we need a better plan than submitting an invalid request to the device.
> >
> > We should only cover for buggy devices.
> >
> > The situation where max_discard_seg > seg_max should be fine.
> >
> > Thus the following can be added to this patch:
> >
> > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > index c443cd64fc9b..3e372b97fe10 100644
> > --- a/drivers/block/virtio_blk.c
> > +++ b/drivers/block/virtio_blk.c
> > @@ -926,8 +926,8 @@ static int virtblk_probe(struct virtio_device *vdev)
> >  		virtio_cread(vdev, struct virtio_blk_config, max_discard_seg,
> >  			     &v);
> >  		blk_queue_max_discard_segments(q,
> > -					       min_not_zero(v,
> > -							    MAX_DISCARD_SEGMENTS));
> > +					       min_t(u32, (v ? v : sg_elems),
> > +						     MAX_DISCARD_SEGMENTS));
> >
> >  		blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
> >  	}
> >
> >
>
> LGTM, I can add this in v3.
>
> Thanks,
> Yongji
Except the logic is convoluted then. I would instead add
/* max_seg == 0 is out of spec but we always handled it */
if (!v)
v = sg_elems;
--
MST
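
(Put together, the suggestion amounts to roughly the following in the VIRTIO_BLK_F_DISCARD branch of virtblk_probe(). This is a sketch of the idea, not the actual v3 patch, and it assumes the local sg_elems from earlier in probe is still in scope, as it is with this patch applied.)

	virtio_cread(vdev, struct virtio_blk_config, max_discard_seg, &v);
	/* max_discard_seg == 0 is out of spec but has been accepted since
	 * 2018; fall back to the data segment limit instead of an arbitrary
	 * 256. */
	if (!v)
		v = sg_elems;
	blk_queue_max_discard_segments(q,
				       min_not_zero(v, MAX_DISCARD_SEGMENTS));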
end of thread, newest: 2022-03-03  7:22 UTC
Thread overview: 9+ messages
[not found] <20220228065720.100-1-xieyongji@bytedance.com>
2022-03-01 12:59 ` [PATCH v2] virtio-blk: Remove BUG_ON() in virtio_queue_rq() Christoph Hellwig
2022-03-01 15:43 ` Michael S. Tsirkin
[not found] ` <CACycT3uGFUjmuESUi9=Kkeg4FboVifAHD0D0gPTkEprcTP=x+g@mail.gmail.com>
2022-03-02 13:15 ` Michael S. Tsirkin
[not found] ` <8fa47a28-a974-4478-23b6-aea14355a315@nvidia.com>
[not found] ` <CACycT3ubdASWTW3UN4Wxg2iYnXRaMkrfHty0p6h1E0EYPF82Yw@mail.gmail.com>
2022-03-03 7:22 ` Michael S. Tsirkin
[not found] ` <85e61a65-4f76-afc0-272f-3b13333349f1@nvidia.com>
2022-03-02 13:17 ` Michael S. Tsirkin
[not found] ` <bd53b0dc-bef6-cd1a-ac5c-68766089a619@nvidia.com>
2022-03-02 13:33 ` Michael S. Tsirkin
[not found] ` <808fbd57-588d-03e3-2904-513f4bdcceaf@nvidia.com>
2022-03-02 14:15 ` Michael S. Tsirkin
[not found] ` <fe42c787-700c-d136-75b9-5a3e1b6d1b4f@nvidia.com>
2022-03-02 14:48 ` Michael S. Tsirkin
[not found] ` <CACycT3uJFNof7UNTdrEK2dVB-W9q4VVkVWnjos6TJawSRF+EDA@mail.gmail.com>
2022-03-02 14:20 ` Michael S. Tsirkin