* [PATCHv2 1/3] block: ensure iov_iter advances for added pages
@ 2022-07-12 15:32 Keith Busch
From: Keith Busch @ 2022-07-12 15:32 UTC (permalink / raw)
To: linux-fsdevel, linux-block; +Cc: Jens Axboe, Al Viro, Keith Busch
From: Keith Busch <kbusch@kernel.org>
There are cases where a bio may not accept additional pages, and the
iov_iter needs to advance by only the data length that was actually
accepted. The zone append path used to handle this correctly, but it was
inadvertently broken when its setup was made common with the normal
read/write case.
Fixes: 576ed9135489c ("block: use bio_add_page in bio_iov_iter_get_pages")
Fixes: c58c0074c54c2 ("block/bio: remove duplicate append pages code")
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
block/bio.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 933ea3210954..fdd58461b78f 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1211,6 +1211,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
ssize_t size, left;
unsigned len, i;
size_t offset;
+ int ret = 0;
/*
* Move page array up in the allocated memory for the bio vecs as far as
@@ -1235,7 +1236,6 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
for (left = size, i = 0; left > 0; left -= len, i++) {
struct page *page = pages[i];
- int ret;
len = min_t(size_t, PAGE_SIZE - offset, left);
if (bio_op(bio) == REQ_OP_ZONE_APPEND)
@@ -1246,13 +1246,13 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
if (ret) {
bio_put_pages(pages + i, left, offset);
- return ret;
+ break;
}
offset = 0;
}
- iov_iter_advance(iter, size);
- return 0;
+ iov_iter_advance(iter, size - left);
+ return ret;
}
/**
--
2.30.2
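The bookkeeping this fix restores can be sketched in user space: when a chunk fails to add, stop and advance only by what was accepted (size - left), not the full size. The names below (add_chunk, fill_accepted, the capacity model) are illustrative stand-ins, not kernel APIs.

```c
#include <assert.h>
#include <stddef.h>

/* bio_full() analogue: reject the whole chunk if it would overflow capacity */
static int add_chunk(size_t *used, size_t cap, size_t len)
{
	if (*used + len > cap)
		return -1;
	*used += len;
	return 0;
}

/*
 * Mirrors the fixed loop: consume 'size' bytes in 'chunk'-sized pieces,
 * break on the first failure, and report size - left -- the amount
 * actually accepted, which is how far the iterator may advance.
 */
static size_t fill_accepted(size_t size, size_t chunk, size_t *used, size_t cap)
{
	size_t left, len;

	for (left = size; left > 0; left -= len) {
		len = chunk < left ? chunk : left;
		if (add_chunk(used, cap, len))
			break;
	}
	return size - left;
}
```

With capacity 6 and 4-byte chunks, a 10-byte fill accepts only the first chunk, so the caller advances by 4, not 10, matching the size - left change in the hunk above.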
* [PATCHv2 2/3] block: ensure bio_iov_add_page can't fail
From: Keith Busch @ 2022-07-12 15:32 UTC (permalink / raw)
To: linux-fsdevel, linux-block; +Cc: Jens Axboe, Al Viro, Keith Busch
From: Keith Busch <kbusch@kernel.org>
Adding the page could only fail on the bio_full() condition, which
checks whether the bio's max segment count or its UINT_MAX total size
limit would be exceeded. We already ensure the max segments can't be
exceeded, so just ensure the total size won't reach the limit either.
This simplifies the error handling and removes the unnecessary repeated
bio_full() checks.
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
block/bio.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index fdd58461b78f..01223f8086ed 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1165,8 +1165,6 @@ static int bio_iov_add_page(struct bio *bio, struct page *page,
bool same_page = false;
if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
- if (WARN_ON_ONCE(bio_full(bio, len)))
- return -EINVAL;
__bio_add_page(bio, page, len, offset);
return 0;
}
@@ -1228,7 +1226,8 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
* result to ensure the bio's total size is correct. The remainder of
* the iov data will be picked up in the next bio iteration.
*/
- size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
+ size = iov_iter_get_pages(iter, pages, UINT_MAX - bio->bi_iter.bi_size,
+ nr_pages, &offset);
if (size > 0)
size = ALIGN_DOWN(size, bdev_logical_block_size(bio->bi_bdev));
if (unlikely(size <= 0))
@@ -1238,16 +1237,16 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
struct page *page = pages[i];
len = min_t(size_t, PAGE_SIZE - offset, left);
- if (bio_op(bio) == REQ_OP_ZONE_APPEND)
+ if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
ret = bio_iov_add_zone_append_page(bio, page, len,
offset);
- else
- ret = bio_iov_add_page(bio, page, len, offset);
+ if (ret) {
+ bio_put_pages(pages + i, left, offset);
+ break;
+ }
+ } else
+ bio_iov_add_page(bio, page, len, offset);
- if (ret) {
- bio_put_pages(pages + i, left, offset);
- break;
- }
offset = 0;
}
--
2.30.2
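The new size limit passed to iov_iter_get_pages can be sketched as a clamp against the bio's remaining size budget; clamp_get_pages is a hypothetical helper written for illustration, not a kernel function.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the size cap: never request more bytes than would keep the
 * bio's running total (a 32-bit size) under UINT_MAX, so a later
 * __bio_add_page cannot trip bio_full()'s size check.
 */
static uint64_t clamp_get_pages(uint32_t bi_size, uint64_t want)
{
	uint64_t cap = (uint64_t)UINT32_MAX - bi_size;

	return want < cap ? want : cap;
}
```

A bio that already holds UINT32_MAX - 8 bytes gets at most 8 more, so the add path can drop its WARN_ON_ONCE(bio_full()) check entirely.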
* [PATCHv2 3/3] block: fix leaking page ref on truncated direct io
From: Keith Busch @ 2022-07-12 15:32 UTC (permalink / raw)
To: linux-fsdevel, linux-block; +Cc: Jens Axboe, Al Viro, Keith Busch
From: Keith Busch <kbusch@kernel.org>
The size being added to a bio from an iov is aligned down to the block
size after the pages have been pinned. If the newly aligned size
truncates the last page, its reference was being leaked. Ensure all
pages that were not added to the bio have their references released.
Since this essentially requires doing the same thing as
bio_put_pages(), and that function had only one caller left, make the
put_page() loop common for everyone.
Fixes: b1a000d3b8ec5 ("block: relax direct io memory alignment")
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Keith Busch <kbusch@kernel.org>
---
v1->v2: fixed possible uninitialized variable
This update is also pushed to my repo,
https://git.kernel.org/pub/scm/linux/kernel/git/kbusch/linux.git/log/?h=alignment-fixes-rebased
block/bio.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 01223f8086ed..de345a9b52db 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1151,14 +1151,6 @@ void bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter)
bio_set_flag(bio, BIO_CLONED);
}
-static void bio_put_pages(struct page **pages, size_t size, size_t off)
-{
- size_t i, nr = DIV_ROUND_UP(size + (off & ~PAGE_MASK), PAGE_SIZE);
-
- for (i = 0; i < nr; i++)
- put_page(pages[i]);
-}
-
static int bio_iov_add_page(struct bio *bio, struct page *page,
unsigned int len, unsigned int offset)
{
@@ -1207,7 +1199,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
struct page **pages = (struct page **)bv;
ssize_t size, left;
- unsigned len, i;
+ unsigned len, i = 0;
size_t offset;
int ret = 0;
@@ -1228,10 +1220,16 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
*/
size = iov_iter_get_pages(iter, pages, UINT_MAX - bio->bi_iter.bi_size,
nr_pages, &offset);
- if (size > 0)
+ if (size > 0) {
+ nr_pages = DIV_ROUND_UP(offset + size, PAGE_SIZE);
size = ALIGN_DOWN(size, bdev_logical_block_size(bio->bi_bdev));
- if (unlikely(size <= 0))
- return size ? size : -EFAULT;
+ } else
+ nr_pages = 0;
+
+ if (unlikely(size <= 0)) {
+ ret = size ? size : -EFAULT;
+ goto out;
+ }
for (left = size, i = 0; left > 0; left -= len, i++) {
struct page *page = pages[i];
@@ -1240,10 +1238,8 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
ret = bio_iov_add_zone_append_page(bio, page, len,
offset);
- if (ret) {
- bio_put_pages(pages + i, left, offset);
+ if (ret)
break;
- }
} else
bio_iov_add_page(bio, page, len, offset);
@@ -1251,6 +1247,10 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
}
iov_iter_advance(iter, size - left);
+out:
+ while (i < nr_pages)
+ put_page(pages[i++]);
+
return ret;
}
--
2.30.2
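The ref accounting can be sketched with page arithmetic: the pinned count comes from the size before alignment, the consumed count from the size after, and the difference is how many trailing references must be dropped. The 4096-byte page size, the helper name, and the assumption that every aligned byte was added successfully are all illustrative.

```c
#include <assert.h>
#include <stddef.h>

#define SKETCH_PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/*
 * offset: start offset into the first pinned page
 * got:    bytes returned by the (hypothetical) page-pinning step
 * blksz:  logical block size used for the ALIGN_DOWN truncation
 *
 * Returns how many trailing page refs must be put when every aligned
 * byte was added to the bio -- the truncation leak this patch fixes.
 */
static unsigned long trailing_puts(size_t offset, size_t got, size_t blksz)
{
	size_t aligned = got / blksz * blksz;  /* ALIGN_DOWN(got, blksz) */
	unsigned long pinned = DIV_ROUND_UP(offset + got, SKETCH_PAGE_SIZE);
	unsigned long added = DIV_ROUND_UP(offset + aligned, SKETCH_PAGE_SIZE);

	return pinned - added;
}
```

Pinning 5120 bytes against a 4096-byte logical block truncates to one full page while two pages were pinned, leaving exactly one reference for the new put_page() loop to drop.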
* Re: [PATCHv2 1/3] block: ensure iov_iter advances for added pages
From: Jens Axboe @ 2022-07-12 20:08 UTC (permalink / raw)
To: Keith Busch, linux-fsdevel, linux-block; +Cc: Al Viro, Keith Busch
On 7/12/22 9:32 AM, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> There are cases where a bio may not accept additional pages, and the iov
> needs to advance to the last data length that was accepted. The zone
> append used to handle this correctly, but was inadvertently broken when
> the setup was made common with the normal r/w case.
Al, how do you want to handle this? I currently see you have that
block-fixes branch, but I don't see anything depending on it. I can do
one of the following with these three fixes:
1) Apply them on top of for-5.20/block
2) Apply them to a new branch off the tag I made for you
And that still leaves the question of what will happen with your
block-fixes branch. Did you want me to pull that in? Or?
--
Jens Axboe
* Re: [PATCHv2 1/3] block: ensure iov_iter advances for added pages
From: Jens Axboe @ 2022-07-13 20:21 UTC (permalink / raw)
To: linux-block, linux-fsdevel, kbusch; +Cc: Al Viro, kbusch
On Tue, 12 Jul 2022 08:32:54 -0700, Keith Busch wrote:
> From: Keith Busch <kbusch@kernel.org>
>
> There are cases where a bio may not accept additional pages, and the iov
> needs to advance to the last data length that was accepted. The zone
> append used to handle this correctly, but was inadvertently broken when
> the setup was made common with the normal r/w case.
>
> [...]
Applied, thanks!
[1/3] block: ensure iov_iter advances for added pages
commit: 5a044eef1265581683530e75351c19e29ee33a11
[2/3] block: ensure bio_iov_add_page can't fail
commit: ac3c48e32c047a3781d6bc28bb5013e4431350fd
[3/3] block: fix leaking page ref on truncated direct io
commit: 44b6b0b0e980d99d24de7e5d57baae48a78db3b6
Best regards,
--
Jens Axboe