* [PATCH] mm/filemap: fix page end in filemap_get_read_batch
From: coolqyj @ 2023-01-04 3:21 UTC
To: linux-fsdevel, linux-mm; +Cc: Matthew Wilcox, stable, qian
From: Qian Yingjin <qian@ddn.com>
I was running traces of the read code against a RAID storage
system to understand why read requests were being misaligned
against the underlying RAID strips. I found that the page end
offset calculation in filemap_get_read_batch() was off by one.

When a read is submitted with end offset 1048575, the end page
index for the read is calculated as 256 when it should be 255.
"last_index" is the index of the page beyond the end of the read,
and it should be skipped when getting a batch of pages for the
read in filemap_get_read_batch().
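For concreteness, here is a minimal userspace sketch of the
arithmetic, assuming a 4 KiB PAGE_SIZE and reproducing the kernel's
DIV_ROUND_UP() macro:

#include <stdio.h>

#define PAGE_SIZE		4096UL
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long pos = 0;		/* start of the read */
	unsigned long count = 1048576;	/* read covers offsets 0..1048575 */

	/* Index of the last page actually covered by the read: 255. */
	unsigned long end_page = (pos + count - 1) / PAGE_SIZE;

	/* Index of the first page beyond the end of the read: 256. */
	unsigned long last_index = DIV_ROUND_UP(pos + count, PAGE_SIZE);

	printf("end_page = %lu, last_index = %lu\n", end_page, last_index);
	return 0;
}

Because the existing "xas.xa_index > max" check treats its bound as
inclusive, passing last_index (256) lets the batch pick up one page
beyond the read.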
The patch below fixes the problem. This code was introduced in
kernel 5.12.
Fixes: cbd59c48ae2b ("mm/filemap: use head pages in generic_file_buffered_read")
Signed-off-by: Qian Yingjin <qian@ddn.com>
---
mm/filemap.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index c4d4ace9cc70..b7754760c09a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2371,7 +2371,7 @@ static void shrink_readahead_size_eio(struct file_ra_state *ra)
* clear so that the caller can take the appropriate action.
*/
static void filemap_get_read_batch(struct address_space *mapping,
- pgoff_t index, pgoff_t max, struct folio_batch *fbatch)
+ pgoff_t index, pgoff_t last_index, struct folio_batch *fbatch)
{
XA_STATE(xas, &mapping->i_pages, index);
struct folio *folio;
@@ -2380,7 +2380,11 @@ static void filemap_get_read_batch(struct address_space *mapping,
for (folio = xas_load(&xas); folio; folio = xas_next(&xas)) {
if (xas_retry(&xas, folio))
continue;
- if (xas.xa_index > max || xa_is_value(folio))
+ /*
+ * "last_index" is the index of the page beyond the end of
+ * the read.
+ */
+ if (xas.xa_index >= last_index || xa_is_value(folio))
break;
if (xa_is_sibling(folio))
break;
@@ -2588,6 +2592,7 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter,
struct folio *folio;
int err = 0;
+ /* "last_index" is the index of the page beyond the end of the read */
last_index = DIV_ROUND_UP(iocb->ki_pos + iter->count, PAGE_SIZE);
retry:
if (fatal_signal_pending(current))
--
2.34.1
* Re: [PATCH] mm/filemap: fix page end in filemap_get_read_batch
From: Matthew Wilcox @ 2023-01-04 14:36 UTC
To: coolqyj; +Cc: linux-fsdevel, linux-mm, stable, qian
On Wed, Jan 04, 2023 at 11:21:24AM +0800, coolqyj@163.com wrote:
> From: Qian Yingjin <qian@ddn.com>
>
> I was running traces of the read code against a RAID storage
> system to understand why read requests were being misaligned
> against the underlying RAID strips. I found that the page end
> offset calculation in filemap_get_read_batch() was off by one.
>
> When a read is submitted with end offset 1048575, the end page
> index for the read is calculated as 256 when it should be 255.
> "last_index" is the index of the page beyond the end of the read,
> and it should be skipped when getting a batch of pages for the
> read in filemap_get_read_batch().
>
> The patch below fixes the problem. This code was introduced in
> kernel 5.12.
Thanks for diagnosing & sending a patch. However, I'd really prefer
to work in terms of 'max' instead of 'last_index' in that function.
Would this work for you?
+++ b/mm/filemap.c
@@ -2595,13 +2595,13 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter,
if (fatal_signal_pending(current))
return -EINTR;
- filemap_get_read_batch(mapping, index, last_index, fbatch);
+ filemap_get_read_batch(mapping, index, last_index - 1, fbatch);
if (!folio_batch_count(fbatch)) {
if (iocb->ki_flags & IOCB_NOIO)
return -EAGAIN;
page_cache_sync_readahead(mapping, ra, filp, index,
last_index - index);
- filemap_get_read_batch(mapping, index, last_index, fbatch);
+ filemap_get_read_batch(mapping, index, last_index - 1, fbatch);
}
if (!folio_batch_count(fbatch)) {
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
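Both bounds describe the same half-open page range: with an
inclusive "max" of last_index - 1, the existing "xas.xa_index > max"
test in filemap_get_read_batch() stops at exactly the same index as
an exclusive ">= last_index" test would, and the adjustment stays in
the callers. A minimal standalone sketch of that equivalence (the
values are illustrative):

#include <assert.h>

int main(void)
{
	unsigned long last_index = 256;		/* first page beyond the read */
	unsigned long max = last_index - 1;	/* inclusive upper bound */

	/* Both predicates stop the scan at exactly the same index. */
	for (unsigned long index = 0; index < 512; index++)
		assert((index >= last_index) == (index > max));

	return 0;
}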
* [PATCH] mm/filemap: fix page end in filemap_get_read_batch
From: coolqyj @ 2023-01-09 1:58 UTC
To: linux-fsdevel, linux-mm; +Cc: Matthew Wilcox, stable, Qian Yingjin
From: Qian Yingjin <qian@ddn.com>
I was running traces of the read code against a RAID storage
system to understand why read requests were being misaligned
against the underlying RAID strips. I found that the page end
offset calculation in filemap_get_read_batch() was off by one.

When a read is submitted with end offset 1048575, the end page
index for the read is calculated as 256 when it should be 255.
"last_index" is the index of the page beyond the end of the read,
and it should be skipped when getting a batch of pages for the
read in filemap_get_read_batch().
The patch below fixes the problem. This code was introduced in
kernel 5.12.
Fixes: cbd59c48ae2b ("mm/filemap: use head pages in generic_file_buffered_read")
Signed-off-by: Qian Yingjin <qian@ddn.com>
---
mm/filemap.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index c4d4ace9cc70..0e20a8d6dd93 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2588,18 +2588,19 @@ static int filemap_get_pages(struct kiocb *iocb, struct iov_iter *iter,
struct folio *folio;
int err = 0;
+ /* "last_index" is the index of the page beyond the end of the read */
last_index = DIV_ROUND_UP(iocb->ki_pos + iter->count, PAGE_SIZE);
retry:
if (fatal_signal_pending(current))
return -EINTR;
- filemap_get_read_batch(mapping, index, last_index, fbatch);
+ filemap_get_read_batch(mapping, index, last_index - 1, fbatch);
if (!folio_batch_count(fbatch)) {
if (iocb->ki_flags & IOCB_NOIO)
return -EAGAIN;
page_cache_sync_readahead(mapping, ra, filp, index,
last_index - index);
- filemap_get_read_batch(mapping, index, last_index, fbatch);
+ filemap_get_read_batch(mapping, index, last_index - 1, fbatch);
}
if (!folio_batch_count(fbatch)) {
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
--
2.34.1