* [PATCH V2 0/2] nvmet: consider req->sg_cnt for inline bio
@ 2021-05-07 1:51 Chaitanya Kulkarni
2021-05-07 1:51 ` [PATCH V2 1/2] nvmet: fix inline bio check for bdev-ns Chaitanya Kulkarni
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Chaitanya Kulkarni @ 2021-05-07 1:51 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, sagi, Chaitanya Kulkarni
Hi,
While testing I came across a scenario where checking the transfer
size alone is not enough for inline bios. The issue is not easily
reproducible with the test tools that I have.
This small patch-series adds an additional check so that we make sure
both the transfer size and req->sg_cnt fit in the inline bio for the
bdev and passthru backends.
-ck
* Changes from V1:-
1. Add a helper nvmet_use_inline_bvec().
2. Add Fixes tags.
3. Add Reviewed-by tags.
Chaitanya Kulkarni (2):
nvmet: fix inline bio check for bdev-ns
nvmet: fix inline bio check for passthru
drivers/nvme/target/io-cmd-bdev.c | 2 +-
drivers/nvme/target/nvmet.h | 6 ++++++
drivers/nvme/target/passthru.c | 2 +-
3 files changed, 8 insertions(+), 2 deletions(-)
--
2.22.1
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
* [PATCH V2 1/2] nvmet: fix inline bio check for bdev-ns
2021-05-07 1:51 [PATCH V2 0/2] nvmet: consider req->sg_cnt for inline bio Chaitanya Kulkarni
@ 2021-05-07 1:51 ` Chaitanya Kulkarni
2021-05-07 1:51 ` [PATCH V2 2/2] nvmet: fix inline bio check for passthru Chaitanya Kulkarni
2021-05-10 7:00 ` [PATCH V2 0/2] nvmet: consider req->sg_cnt for inline bio Christoph Hellwig
2 siblings, 0 replies; 4+ messages in thread
From: Chaitanya Kulkarni @ 2021-05-07 1:51 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, sagi, Chaitanya Kulkarni
When handling rw commands, for the inline bio case we only consider
the transfer size. This works well when req->sg_cnt fits into
req->inline_bvec, but it will trigger the warning in
__bio_add_page() when req->sg_cnt > NVMET_MAX_INLINE_BIOVEC.
Consider an I/O of size 32768 where the first page is not aligned to
the page boundary; the I/O is then split in the following manner :-
[ 2206.256140] nvmet: sg->length 3440 sg->offset 656
[ 2206.256144] nvmet: sg->length 4096 sg->offset 0
[ 2206.256148] nvmet: sg->length 4096 sg->offset 0
[ 2206.256152] nvmet: sg->length 4096 sg->offset 0
[ 2206.256155] nvmet: sg->length 4096 sg->offset 0
[ 2206.256159] nvmet: sg->length 4096 sg->offset 0
[ 2206.256163] nvmet: sg->length 4096 sg->offset 0
[ 2206.256166] nvmet: sg->length 4096 sg->offset 0
[ 2206.256170] nvmet: sg->length 656 sg->offset 0
Now req->transfer_len == NVMET_MAX_INLINE_DATA_LEN i.e. 32768, but
req->sg_cnt (9) > NVMET_MAX_INLINE_BIOVEC (8).
This will result in the following warning message :-
nvmet_bdev_execute_rw()
bio_add_page()
__bio_add_page()
WARN_ON_ONCE(bio_full(bio, len));
This scenario is very hard to reproduce: it was only seen on the
nvme-loop transport with rw commands issued through the passthru IOCTL
interface from a host application whose data buffer was allocated with
malloc() and not posix_memalign().
Fixes: 73383adfad24 ("nvmet: don't split large I/Os unconditionally")
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/target/io-cmd-bdev.c | 2 +-
drivers/nvme/target/nvmet.h | 6 ++++++
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
index 9a8b3726a37c..429263ca9b97 100644
--- a/drivers/nvme/target/io-cmd-bdev.c
+++ b/drivers/nvme/target/io-cmd-bdev.c
@@ -258,7 +258,7 @@ static void nvmet_bdev_execute_rw(struct nvmet_req *req)
sector = nvmet_lba_to_sect(req->ns, req->cmd->rw.slba);
- if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+ if (nvmet_use_inline_bvec(req)) {
bio = &req->b.inline_bio;
bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
} else {
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 5566ed403576..d69a409515d6 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -616,4 +616,10 @@ static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
}
+static inline bool nvmet_use_inline_bvec(struct nvmet_req *req)
+{
+ return req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN &&
+ req->sg_cnt <= NVMET_MAX_INLINE_BIOVEC;
+}
+
#endif /* _NVMET_H */
--
2.22.1
* [PATCH V2 2/2] nvmet: fix inline bio check for passthru
2021-05-07 1:51 [PATCH V2 0/2] nvmet: consider req->sg_cnt for inline bio Chaitanya Kulkarni
2021-05-07 1:51 ` [PATCH V2 1/2] nvmet: fix inline bio check for bdev-ns Chaitanya Kulkarni
@ 2021-05-07 1:51 ` Chaitanya Kulkarni
2021-05-10 7:00 ` [PATCH V2 0/2] nvmet: consider req->sg_cnt for inline bio Christoph Hellwig
2 siblings, 0 replies; 4+ messages in thread
From: Chaitanya Kulkarni @ 2021-05-07 1:51 UTC (permalink / raw)
To: linux-nvme; +Cc: hch, sagi, Chaitanya Kulkarni
When handling passthru commands, for the inline bio allocation we only
consider the transfer size. This works well when req->sg_cnt fits into
req->inline_bvec, but it results in an early return from
bio_add_hw_page() when req->sg_cnt > NVMET_MAX_INLINE_BIOVEC.
Consider an I/O of size 32768 where the first buffer is not aligned to
the page boundary; the I/O is then split in the following manner :-
[ 2206.256140] nvmet: sg->length 3440 sg->offset 656
[ 2206.256144] nvmet: sg->length 4096 sg->offset 0
[ 2206.256148] nvmet: sg->length 4096 sg->offset 0
[ 2206.256152] nvmet: sg->length 4096 sg->offset 0
[ 2206.256155] nvmet: sg->length 4096 sg->offset 0
[ 2206.256159] nvmet: sg->length 4096 sg->offset 0
[ 2206.256163] nvmet: sg->length 4096 sg->offset 0
[ 2206.256166] nvmet: sg->length 4096 sg->offset 0
[ 2206.256170] nvmet: sg->length 656 sg->offset 0
Now req->transfer_len == NVMET_MAX_INLINE_DATA_LEN i.e. 32768, but
req->sg_cnt (9) > NVMET_MAX_INLINE_BIOVEC (8).
This will result in early return in the following code path :-
nvmet_passthru_map_sg()
bio_add_pc_page()
bio_add_hw_page()
if (bio_full(bio, len))
return 0;
Use the previously introduced helper nvmet_use_inline_bvec() to also
consider req->sg_cnt when using an inline bio. This only affects the
nvme-loop transport.
Fixes: dab3902b19a0 ("nvmet: use inline bio for passthru fast path")
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
drivers/nvme/target/passthru.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 2798944899b7..39b1473f7204 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -194,7 +194,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
if (req->sg_cnt > BIO_MAX_VECS)
return -EINVAL;
- if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+ if (nvmet_use_inline_bvec(req)) {
bio = &req->p.inline_bio;
bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
} else {
--
2.22.1
* Re: [PATCH V2 0/2] nvmet: consider req->sg_cnt for inline bio
2021-05-07 1:51 [PATCH V2 0/2] nvmet: consider req->sg_cnt for inline bio Chaitanya Kulkarni
2021-05-07 1:51 ` [PATCH V2 1/2] nvmet: fix inline bio check for bdev-ns Chaitanya Kulkarni
2021-05-07 1:51 ` [PATCH V2 2/2] nvmet: fix inline bio check for passthru Chaitanya Kulkarni
@ 2021-05-10 7:00 ` Christoph Hellwig
2 siblings, 0 replies; 4+ messages in thread
From: Christoph Hellwig @ 2021-05-10 7:00 UTC (permalink / raw)
To: Chaitanya Kulkarni; +Cc: linux-nvme, hch, sagi
Thanks,
applied to nvme-5.13.