* [PATCH v2] filemap: optimize order0 folio in filemap_map_pages
From: Jinjiang Tu @ 2025-09-03 8:42 UTC
To: willy, akpm, david, linux-mm; +Cc: wangkefeng.wang, tujinjiang
There are two pointless folio refcount updates for an order-0 folio in
filemap_map_pages(): filemap_map_order0_folio() adds a folio refcount
after the folio is mapped to the pte, and filemap_map_pages() then drops
the refcount grabbed by next_uptodate_folio(). The two updates cancel
out, so we can leave the refcount unchanged in this case.
As Matthew mentioned in [1], it is safe to call folio_unlock() before
calling folio_put() here, because the folio is in the page cache with a
refcount held, and truncation will wait for the unlock.
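For illustration, a simplified sketch of the refcount traffic on the
order-0 path (editorial sketch; the surrounding details of
filemap_map_pages() are elided, see the diff below for the real change):

/*
 * Before:
 *   folio = next_uptodate_folio(...);          // takes a reference
 *   ...
 *   set_pte_range(vmf, folio, page, 1, addr);
 *   folio_ref_inc(folio);                      // reference for the new pte
 *   ...
 *   folio_unlock(folio);
 *   folio_put(folio);                          // drop next_uptodate_folio()'s ref
 *
 * After:
 *   folio = next_uptodate_folio(...);          // takes a reference
 *   ...
 *   set_pte_range(vmf, folio, page, 1, addr);
 *   // next_uptodate_folio()'s reference is handed over to the pte:
 *   // no folio_ref_inc()/folio_put() pair on the mapped path
 *   ...
 *   folio_unlock(folio);
 */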
With this patch, we see an 8% performance gain in the lmbench test case
'lat_pagefault -P 1 file', where the file size is 512M.
[1]: https://lore.kernel.org/all/aKcU-fzxeW3xT5Wv@casper.infradead.org/
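(Editorial note: lat_pagefault essentially times page faults on a
file-backed mapping; the following is a minimal sketch of the kind of
loop being measured, using plain POSIX calls rather than the actual
lmbench source. touch_pages() and the 4096-byte page size are
illustrative assumptions.)

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Fault in each page of a file-backed mapping once (sketch only). */
static void touch_pages(const char *path)
{
	struct stat st;
	int fd = open(path, O_RDONLY);
	volatile char sum = 0;

	fstat(fd, &st);
	char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	for (off_t off = 0; off < st.st_size; off += 4096)
		sum += p[off];	/* each access may take a page fault */
	munmap(p, st.st_size);
	close(fd);
}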
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
---
v2:
* Don't move folio_unlock(), suggested by Matthew.
mm/filemap.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 751838ef05e5..3da8bf8b93ec 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3693,6 +3693,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	}
 
 	vmf->pte = old_ptep;
+	folio_put(folio);
 
 	return ret;
 }
@@ -3705,7 +3706,7 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
 	struct page *page = &folio->page;
 
 	if (PageHWPoison(page))
-		return ret;
+		goto out;
 
 	/* See comment of filemap_map_folio_range() */
 	if (!folio_test_workingset(folio))
@@ -3717,15 +3718,17 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
 	 * the fault-around logic.
 	 */
 	if (!pte_none(ptep_get(vmf->pte)))
-		return ret;
+		goto out;
 
 	if (vmf->address == addr)
 		ret = VM_FAULT_NOPAGE;
 
 	set_pte_range(vmf, folio, page, 1, addr);
 	(*rss)++;
-	folio_ref_inc(folio);
+	return ret;
 
+out:
+	folio_put(folio);
 	return ret;
 }
@@ -3785,7 +3788,6 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 				nr_pages, &rss, &mmap_miss);
 
 		folio_unlock(folio);
-		folio_put(folio);
 	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
 	add_mm_counter(vma->vm_mm, folio_type, rss);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
--
2.43.0
* Re: [PATCH v2] filemap: optimize order0 folio in filemap_map_pages
From: David Hildenbrand @ 2025-09-03 9:16 UTC
To: Jinjiang Tu, willy, akpm, linux-mm; +Cc: wangkefeng.wang
> +++ b/mm/filemap.c
> @@ -3693,6 +3693,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
>  	}
>  
>  	vmf->pte = old_ptep;
> +	folio_put(folio);
>  
>  	return ret;
>  }
> @@ -3705,7 +3706,7 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
>  	struct page *page = &folio->page;
>  
>  	if (PageHWPoison(page))
> -		return ret;
> +		goto out;
>  
>  	/* See comment of filemap_map_folio_range() */
>  	if (!folio_test_workingset(folio))
> @@ -3717,15 +3718,17 @@ static vm_fault_t filemap_map_order0_folio(struct vm_fault *vmf,
>  	 * the fault-around logic.
>  	 */
>  	if (!pte_none(ptep_get(vmf->pte)))
> -		return ret;
> +		goto out;
>  
>  	if (vmf->address == addr)
>  		ret = VM_FAULT_NOPAGE;
>  
>  	set_pte_range(vmf, folio, page, 1, addr);
>  	(*rss)++;
> -	folio_ref_inc(folio);
> +	return ret;
>  
> +out:
> +	folio_put(folio);
We can use a folio_ref_dec() here:

	/* Locked folios cannot get truncated. */
	folio_ref_dec(folio);
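(Editorial aside: the difference matters because folio_put() also tests
for the count hitting zero and frees the folio; a minimal sketch of its
usual shape, simplified from include/linux/mm.h:)

	static inline void folio_put(struct folio *folio)
	{
		if (folio_put_testzero(folio))	/* dec and test for zero */
			__folio_put(folio);	/* last reference: free the folio */
	}

Because the folio is still locked here, it cannot be truncated and the
page cache still holds its own reference, so this reference cannot be
the last one; a plain folio_ref_dec() safely skips the test-and-free
path.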
>  	return ret;
>  }
> 
> @@ -3785,7 +3788,6 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
>  				nr_pages, &rss, &mmap_miss);
>  
>  		folio_unlock(folio);
> -		folio_put(folio);
>  	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
>  	add_mm_counter(vma->vm_mm, folio_type, rss);
>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
I think we can optimize filemap_map_folio_range() as well:
diff --git a/mm/filemap.c b/mm/filemap.c
index b101405b770ae..d1fcddc72c5f6 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3646,6 +3646,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		unsigned long addr, unsigned int nr_pages,
 		unsigned long *rss, unsigned short *mmap_miss)
 {
+	bool ref_from_caller = true;
 	vm_fault_t ret = 0;
 	struct page *page = folio_page(folio, start);
 	unsigned int count = 0;
@@ -3679,7 +3680,9 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		if (count) {
 			set_pte_range(vmf, folio, page, count, addr);
 			*rss += count;
-			folio_ref_add(folio, count);
+			if (count - ref_from_caller)
+				folio_ref_add(folio, count - ref_from_caller);
+			ref_from_caller = false;
 			if (in_range(vmf->address, addr, count * PAGE_SIZE))
 				ret = VM_FAULT_NOPAGE;
 		}
@@ -3694,13 +3697,19 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		*rss += count;
-		folio_ref_add(folio, count);
+		if (count - ref_from_caller)
+			folio_ref_add(folio, count - ref_from_caller);
+		ref_from_caller = false;
 		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
 
 	vmf->pte = old_ptep;
 
+	if (ref_from_caller)
+		/* Locked folios cannot get truncated. */
+		folio_ref_dec(folio);
+
 	return ret;
 }
It would save at least a folio_ref_dec(), and in corner cases (when only
a single page is mapped) also a folio_ref_add().
--
Cheers
David / dhildenb
* Re: [PATCH v2] filemap: optimize order0 folio in filemap_map_pages
From: Jinjiang Tu @ 2025-09-04 1:05 UTC
To: David Hildenbrand, willy, akpm, linux-mm; +Cc: wangkefeng.wang
On 2025/9/3 17:16, David Hildenbrand wrote:
> [...]
> It would save at least a folio_ref_dec(), and in corner cases (when
> only a single page is mapped) also a folio_ref_add().
>
Maybe we can first count the references to add, and only call
folio_ref_{add,sub} once before returning:
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3643,6 +3643,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	struct page *page = folio_page(folio, start);
 	unsigned int count = 0;
 	pte_t *old_ptep = vmf->pte;
+	int ref_to_add = -1;
 
 	do {
 		if (PageHWPoison(page + count))
@@ -3672,7 +3673,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 		if (count) {
 			set_pte_range(vmf, folio, page, count, addr);
 			*rss += count;
-			folio_ref_add(folio, count);
+			ref_to_add += count;
 			if (in_range(vmf->address, addr, count * PAGE_SIZE))
 				ret = VM_FAULT_NOPAGE;
 		}
@@ -3687,12 +3688,17 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 	if (count) {
 		set_pte_range(vmf, folio, page, count, addr);
 		*rss += count;
-		folio_ref_add(folio, count);
+		ref_to_add += count;
 		if (in_range(vmf->address, addr, count * PAGE_SIZE))
 			ret = VM_FAULT_NOPAGE;
 	}
 
 	vmf->pte = old_ptep;
+	/* Locked folios cannot get truncated. */
+	if (ref_to_add > 0)
+		folio_ref_add(folio, ref_to_add);
+	else if (ref_to_add < 0)
+		folio_ref_sub(folio, ref_to_add);
 
 	return ret;
 }
* Re: [PATCH v2] filemap: optimize order0 folio in filemap_map_pages
From: Jinjiang Tu @ 2025-09-04 1:06 UTC
To: David Hildenbrand, willy, akpm, linux-mm; +Cc: wangkefeng.wang
On 2025/9/4 9:05, Jinjiang Tu wrote:
>
> [...]
>
> Maybe we can first count the references to add, and only call
> folio_ref_{add,sub} once before returning:
>
> [...]
>
>  	vmf->pte = old_ptep;
> +	/* Locked folios cannot get truncated. */
> +	if (ref_to_add > 0)
> +		folio_ref_add(folio, ref_to_add);
> +	else if (ref_to_add < 0)
> +		folio_ref_sub(folio, ref_to_add);
The folio_ref_sub() amount should be negated, i.e.:

+	else if (ref_to_add < 0)
+		folio_ref_sub(folio, -ref_to_add);
>
>  	return ret;
>  }
* Re: [PATCH v2] filemap: optimize order0 folio in filemap_map_pages
From: David Hildenbrand @ 2025-09-04 6:20 UTC
To: Jinjiang Tu, willy, akpm, linux-mm; +Cc: wangkefeng.wang
On 04.09.25 03:06, Jinjiang Tu wrote:
>>
>> [...]
>>
>> Maybe we can first count the references to add, and only call
>> folio_ref_{add,sub} once before returning
I'm not a fan of that, because I'm planning on moving the
folio_ref_add() before the set_pte_range() so we can minimize the
number of false positives with our folio_ref_count() !=
folio_expected_ref_count() checks, and I can sanity check, when
adjusting the mapcount, that it never exceeds the refcount.
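(Editorial sketch of the reordering described above, based on the
ref_from_caller diff from earlier in the thread; an illustration of the
planned direction, not a posted patch:)

	/*
	 * Take the reference(s) *before* publishing the PTEs, so the
	 * refcount already covers the new mappings at the moment the
	 * mapcount grows; transient folio_ref_count() !=
	 * folio_expected_ref_count() windows shrink accordingly.
	 */
	if (count) {
		if (count - ref_from_caller)
			folio_ref_add(folio, count - ref_from_caller);
		ref_from_caller = false;
		set_pte_range(vmf, folio, page, count, addr);	/* refs taken first */
		*rss += count;
		if (in_range(vmf->address, addr, count * PAGE_SIZE))
			ret = VM_FAULT_NOPAGE;
	}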
--
Cheers
David / dhildenb
* Re: [PATCH v2] filemap: optimize order0 folio in filemap_map_pages
From: Jinjiang Tu @ 2025-09-04 12:12 UTC
To: David Hildenbrand, willy, akpm, linux-mm; +Cc: wangkefeng.wang
On 2025/9/4 14:20, David Hildenbrand wrote:
> On 04.09.25 03:06, Jinjiang Tu wrote:
>>>
>>> [...]
>>>
>>> Maybe we can first count the references to add, and only call
>>> folio_ref_{add,sub} once before returning
>
> I'm not a fan of that, because I'm planning on moving the
> folio_ref_add() before the set_pte_range() so we can minimize the
> number of false positives with our folio_ref_count() !=
> folio_expected_ref_count() checks, and I can sanity check, when
> adjusting the mapcount, that it never exceeds the refcount.
>
I see, I will send a v3 with the diff you suggested.
Thanks.