* [PATCH] readahead: Update the file_ra_state.ra_pages with each readahead operation
From: Youling Tang @ 2023-10-30 7:41 UTC
To: Matthew Wilcox, Andrew Morton
Cc: linux-fsdevel, linux-mm, linux-kernel, tangyouling, youling.tang
From: Youling Tang <tangyouling@kylinos.cn>
Changing the read_ahead_kb value midway through a sequential read of a
large file shows that ra->ra_pages remains unchanged (the new ra_pages
value is only picked up the next time the file is opened), because
file_ra_state_init() is, in most cases, called only once, from
do_dentry_open().
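For reference, file_ra_state_init() only snapshots the bdi value at open
time; a sketch of the function follows (paraphrased from mm/readahead.c,
the tree is authoritative):

  /*
   * Called from do_dentry_open(): the per-file readahead window is
   * copied from the backing device once here, so a later change to
   * read_ahead_kb is not seen by files that are already open.
   */
  void file_ra_state_init(struct file_ra_state *ra,
                          struct address_space *mapping)
  {
          ra->ra_pages = inode_to_bdi(mapping->host)->ra_pages;
          ra->prev_pos = -1;
  }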
In ondemand_readahead(), refresh ra->ra_pages from bdi->ra_pages so that
the maximum number of pages the readahead algorithm may allocate matches
(read_ahead_kb * 1024) / PAGE_SIZE after read_ahead_kb is modified.
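(For reference, the sysfs write is converted to pages along these lines;
this is only an illustration and the helper name below is mine, not the
kernel's:)

  /*
   * Illustration: how a read_ahead_kb value maps to a page count.
   * KiB -> pages, equivalent to (read_ahead_kb * 1024) / PAGE_SIZE.
   */
  static unsigned long ra_kb_to_pages(unsigned long read_ahead_kb)
  {
          return read_ahead_kb >> (PAGE_SHIFT - 10);
  }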
Signed-off-by: Youling Tang <tangyouling@kylinos.cn>
---
mm/readahead.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index e815c114de21..3dbabf819187 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -554,12 +554,14 @@ static void ondemand_readahead(struct readahead_control *ractl,
 {
         struct backing_dev_info *bdi = inode_to_bdi(ractl->mapping->host);
         struct file_ra_state *ra = ractl->ra;
-        unsigned long max_pages = ra->ra_pages;
+        unsigned long max_pages;
         unsigned long add_pages;
         pgoff_t index = readahead_index(ractl);
         pgoff_t expected, prev_index;
         unsigned int order = folio ? folio_order(folio) : 0;
 
+        max_pages = ra->ra_pages = bdi->ra_pages;
+
         /*
          * If the request exceeds the readahead window, allow the read to
          * be up to the optimal hardware IO size
--
2.25.1
* Re: [PATCH] readahead: Update the file_ra_state.ra_pages with each readahead operation
From: Matthew Wilcox @ 2023-10-30 16:47 UTC
To: Youling Tang
Cc: Andrew Morton, linux-fsdevel, linux-mm, linux-kernel, tangyouling
On Mon, Oct 30, 2023 at 03:41:30PM +0800, Youling Tang wrote:
> From: Youling Tang <tangyouling@kylinos.cn>
>
> Changing the read_ahead_kb value midway through a sequential read of a
> large file shows that ra->ra_pages remains unchanged (the new ra_pages
> value is only picked up the next time the file is opened), because
> file_ra_state_init() is, in most cases, called only once, from
> do_dentry_open().
>
> In ondemand_readahead(), refresh ra->ra_pages from bdi->ra_pages so
> that the maximum number of pages the readahead algorithm may allocate
> matches (read_ahead_kb * 1024) / PAGE_SIZE after read_ahead_kb is
> modified.
Explain to me why this is the correct behaviour.
Many things are only initialised at open() time and are not updated until
the next open(). This is longstanding behaviour that some apps expect.
Why should we change it?
* Re: [PATCH] readahead: Update the file_ra_state.ra_pages with each readahead operation
From: Youling Tang @ 2023-10-31 1:56 UTC
To: Matthew Wilcox
Cc: Andrew Morton, linux-fsdevel, linux-mm, linux-kernel, tangyouling
Hi, Matthew
On 2023/10/31 12:47 AM, Matthew Wilcox wrote:
> On Mon, Oct 30, 2023 at 03:41:30PM +0800, Youling Tang wrote:
>> From: Youling Tang <tangyouling@kylinos.cn>
>>
>> Changing the read_ahead_kb value midway through a sequential read of a
>> large file shows that ra->ra_pages remains unchanged (the new ra_pages
>> value is only picked up the next time the file is opened), because
>> file_ra_state_init() is, in most cases, called only once, from
>> do_dentry_open().
>>
>> In ondemand_readahead(), refresh ra->ra_pages from bdi->ra_pages so
>> that the maximum number of pages the readahead algorithm may allocate
>> matches (read_ahead_kb * 1024) / PAGE_SIZE after read_ahead_kb is
>> modified.
> Explain to me why this is the correct behaviour.
Because I initially expected that modifying read_ahead_kb would
immediately improve the read performance of a sequential read of a large
file that is already in progress.
> Many things are only initialised at open() time and are not updated until
> the next open(). This is longstanding behaviour that some apps expect.
Thanks for your explanation. Since picking up the new value at the next
open() is the behaviour apps expect, I will drop this change.
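For my use case it also looks like an already-open reader can request the
new value itself: on Linux, POSIX_FADV_NORMAL re-initialises the per-file
readahead window from the backing device, so a helper like the sketch
below (my own, hypothetical) should pick up a changed read_ahead_kb
without reopening; I still need to verify this on the target kernel.

  #define _POSIX_C_SOURCE 200112L
  #include <fcntl.h>
  #include <stdio.h>

  /*
   * Hypothetical userspace helper: ask the kernel to reset this fd's
   * readahead state to the backing device's current default, which
   * should reflect a read_ahead_kb change made after open().
   */
  static int refresh_readahead(int fd)
  {
          int err = posix_fadvise(fd, 0, 0, POSIX_FADV_NORMAL);

          if (err)
                  fprintf(stderr, "posix_fadvise: %d\n", err);
          return err;
  }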
Thanks,
Youling.