* [PATCH 0/5] support shmem mTHP collapse
@ 2024-08-19 8:14 Baolin Wang
2024-08-19 8:14 ` [PATCH 1/5] mm: khugepaged: expand the is_refcount_suitable() to support file folios Baolin Wang
` (4 more replies)
0 siblings, 5 replies; 10+ messages in thread
From: Baolin Wang @ 2024-08-19 8:14 UTC (permalink / raw)
To: akpm
Cc: hughd, willy, david, 21cnbao, ryan.roberts, shy828301, ziy,
baolin.wang, linux-mm, linux-kernel
Hi,
Shmem already supports mTHP allocation[1], and this patch set adds support
for shmem mTHP collapse, along with the relevant test cases. Please help
review. Thanks.
Note: all khugepaged selftests have passed.
[1] https://lore.kernel.org/all/cover.1718090413.git.baolin.wang@linux.alibaba.com/T/#m4bd7e701c7b5f36f712055e4360cad593a22b3bf
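For background, per-size shmem mTHP behaviour is controlled through the
`hugepages-<size>kB/shmem_enabled` sysfs knobs that patch 5's selftest writes.
A minimal sketch of mapping a page order to its knob directory, assuming a
4KiB base page size (the helper name is illustrative):

```shell
# Map an mTHP page order to its transparent_hugepage sysfs directory name,
# assuming 4KiB base pages (so order 4 -> 64KiB).
order_to_dir() {
    echo "hugepages-$((4 << $1))kB"
}

order_to_dir 4   # prints: hugepages-64kB

# Enabling 64KiB shmem mTHP before a collapse test would then look like:
#   echo always > /sys/kernel/mm/transparent_hugepage/$(order_to_dir 4)/shmem_enabled
```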
Baolin Wang (5):
mm: khugepaged: expand the is_refcount_suitable() to support file
folios
mm: khugepaged: use the number of pages in the folio to check the
reference count
mm: khugepaged: support shmem mTHP copy
mm: khugepaged: support shmem mTHP collapse
selftests: mm: support shmem mTHP collapse testing
mm/khugepaged.c | 60 ++++++++++++-----------
tools/testing/selftests/mm/khugepaged.c | 4 +-
tools/testing/selftests/mm/thp_settings.c | 46 ++++++++++++++---
tools/testing/selftests/mm/thp_settings.h | 9 +++-
4 files changed, 83 insertions(+), 36 deletions(-)
--
2.39.3
^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH 1/5] mm: khugepaged: expand the is_refcount_suitable() to support file folios
2024-08-19 8:14 [PATCH 0/5] support shmem mTHP collapse Baolin Wang
@ 2024-08-19 8:14 ` Baolin Wang
2024-08-19 8:36 ` David Hildenbrand
2024-08-19 8:14 ` [PATCH 2/5] mm: khugepaged: use the number of pages in the folio to check the reference count Baolin Wang
` (3 subsequent siblings)
4 siblings, 1 reply; 10+ messages in thread
From: Baolin Wang @ 2024-08-19 8:14 UTC (permalink / raw)
To: akpm
Cc: hughd, willy, david, 21cnbao, ryan.roberts, shy828301, ziy,
baolin.wang, linux-mm, linux-kernel
Expand is_refcount_suitable() to support reference checks for file folios,
in preparation for supporting shmem mTHP collapse.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/khugepaged.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cdd1d8655a76..f11b4f172e61 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -549,8 +549,14 @@ static bool is_refcount_suitable(struct folio *folio)
int expected_refcount;
expected_refcount = folio_mapcount(folio);
- if (folio_test_swapcache(folio))
+ if (folio_test_anon(folio)) {
+ expected_refcount += folio_test_swapcache(folio) ?
+ folio_nr_pages(folio) : 0;
+ } else {
expected_refcount += folio_nr_pages(folio);
+ if (folio_test_private(folio))
+ expected_refcount++;
+ }
return folio_ref_count(folio) == expected_refcount;
}
@@ -2285,8 +2291,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
break;
}
- if (folio_ref_count(folio) !=
- 1 + folio_mapcount(folio) + folio_test_private(folio)) {
+ if (!is_refcount_suitable(folio)) {
result = SCAN_PAGE_COUNT;
break;
}
--
2.39.3
* [PATCH 2/5] mm: khugepaged: use the number of pages in the folio to check the reference count
2024-08-19 8:14 [PATCH 0/5] support shmem mTHP collapse Baolin Wang
2024-08-19 8:14 ` [PATCH 1/5] mm: khugepaged: expand the is_refcount_suitable() to support file folios Baolin Wang
@ 2024-08-19 8:14 ` Baolin Wang
2024-08-19 9:40 ` David Hildenbrand
2024-08-19 8:14 ` [PATCH 3/5] mm: khugepaged: support shmem mTHP copy Baolin Wang
` (2 subsequent siblings)
4 siblings, 1 reply; 10+ messages in thread
From: Baolin Wang @ 2024-08-19 8:14 UTC (permalink / raw)
To: akpm
Cc: hughd, willy, david, 21cnbao, ryan.roberts, shy828301, ziy,
baolin.wang, linux-mm, linux-kernel
Use the number of pages in the folio to check the reference count as
preparation for supporting shmem mTHP collapse.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/khugepaged.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index f11b4f172e61..60d95f08610c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1994,7 +1994,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
/*
* We control three references to the folio:
* - we hold a pin on it;
- * - one reference from page cache;
+ * - nr_pages reference from page cache;
* - one from lru_isolate_folio;
* If those are the only references, then any new usage
* of the folio will have to fetch it from the page
@@ -2002,7 +2002,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
* truncate, so any new usage will be blocked until we
* unlock folio after collapse/during rollback.
*/
- if (folio_ref_count(folio) != 3) {
+ if (folio_ref_count(folio) != 2 + folio_nr_pages(folio)) {
result = SCAN_PAGE_COUNT;
xas_unlock_irq(&xas);
folio_putback_lru(folio);
@@ -2185,7 +2185,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
folio_clear_active(folio);
folio_clear_unevictable(folio);
folio_unlock(folio);
- folio_put_refs(folio, 3);
+ folio_put_refs(folio, 2 + folio_nr_pages(folio));
}
goto out;
--
2.39.3
* [PATCH 3/5] mm: khugepaged: support shmem mTHP copy
2024-08-19 8:14 [PATCH 0/5] support shmem mTHP collapse Baolin Wang
2024-08-19 8:14 ` [PATCH 1/5] mm: khugepaged: expand the is_refcount_suitable() to support file folios Baolin Wang
2024-08-19 8:14 ` [PATCH 2/5] mm: khugepaged: use the number of pages in the folio to check the reference count Baolin Wang
@ 2024-08-19 8:14 ` Baolin Wang
2024-08-19 8:14 ` [PATCH 4/5] mm: khugepaged: support shmem mTHP collapse Baolin Wang
2024-08-19 8:14 ` [PATCH 5/5] selftests: mm: support shmem mTHP collapse testing Baolin Wang
4 siblings, 0 replies; 10+ messages in thread
From: Baolin Wang @ 2024-08-19 8:14 UTC (permalink / raw)
To: akpm
Cc: hughd, willy, david, 21cnbao, ryan.roberts, shy828301, ziy,
baolin.wang, linux-mm, linux-kernel
Iterate over each subpage of the large folio when copying, in preparation
for supporting shmem mTHP collapse.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/khugepaged.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 60d95f08610c..91ee672db202 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2060,17 +2060,22 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
index = start;
dst = folio_page(new_folio, 0);
list_for_each_entry(folio, &pagelist, lru) {
+ int i, nr_pages = folio_nr_pages(folio);
+
while (index < folio->index) {
clear_highpage(dst);
index++;
dst++;
}
- if (copy_mc_highpage(dst, folio_page(folio, 0)) > 0) {
- result = SCAN_COPY_MC;
- goto rollback;
+
+ for (i = 0; i < nr_pages; i++) {
+ if (copy_mc_highpage(dst, folio_page(folio, i)) > 0) {
+ result = SCAN_COPY_MC;
+ goto rollback;
+ }
+ index++;
+ dst++;
}
- index++;
- dst++;
}
while (index < end) {
clear_highpage(dst);
--
2.39.3
* [PATCH 4/5] mm: khugepaged: support shmem mTHP collapse
2024-08-19 8:14 [PATCH 0/5] support shmem mTHP collapse Baolin Wang
` (2 preceding siblings ...)
2024-08-19 8:14 ` [PATCH 3/5] mm: khugepaged: support shmem mTHP copy Baolin Wang
@ 2024-08-19 8:14 ` Baolin Wang
2024-08-19 8:14 ` [PATCH 5/5] selftests: mm: support shmem mTHP collapse testing Baolin Wang
4 siblings, 0 replies; 10+ messages in thread
From: Baolin Wang @ 2024-08-19 8:14 UTC (permalink / raw)
To: akpm
Cc: hughd, willy, david, 21cnbao, ryan.roberts, shy828301, ziy,
baolin.wang, linux-mm, linux-kernel
Shmem already supports the allocation of mTHP, but khugepaged did not yet
support collapsing mTHP folios. Now that khugepaged is ready to handle mTHP,
this patch enables the collapse of shmem mTHP.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/khugepaged.c | 28 +++++++++++-----------------
1 file changed, 11 insertions(+), 17 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 91ee672db202..4b35239b5e46 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1847,7 +1847,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
}
} while (1);
- for (index = start; index < end; index++) {
+ for (index = start; index < end;) {
xas_set(&xas, index);
folio = xas_load(&xas);
@@ -1866,6 +1866,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
}
}
nr_none++;
+ index++;
continue;
}
@@ -1947,12 +1948,10 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
* we locked the first folio, then a THP might be there already.
* This will be discovered on the first iteration.
*/
- if (folio_test_large(folio)) {
- result = folio_order(folio) == HPAGE_PMD_ORDER &&
- folio->index == start
- /* Maybe PMD-mapped */
- ? SCAN_PTE_MAPPED_HUGEPAGE
- : SCAN_PAGE_COMPOUND;
+ if (folio_order(folio) == HPAGE_PMD_ORDER &&
+ folio->index == start) {
+ /* Maybe PMD-mapped */
+ result = SCAN_PTE_MAPPED_HUGEPAGE;
goto out_unlock;
}
@@ -2013,6 +2012,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
* Accumulate the folios that are being collapsed.
*/
list_add_tail(&folio->lru, &pagelist);
+ index += folio_nr_pages(folio);
continue;
out_unlock:
folio_unlock(folio);
@@ -2265,16 +2265,10 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
continue;
}
- /*
- * TODO: khugepaged should compact smaller compound pages
- * into a PMD sized page
- */
- if (folio_test_large(folio)) {
- result = folio_order(folio) == HPAGE_PMD_ORDER &&
- folio->index == start
- /* Maybe PMD-mapped */
- ? SCAN_PTE_MAPPED_HUGEPAGE
- : SCAN_PAGE_COMPOUND;
+ if (folio_order(folio) == HPAGE_PMD_ORDER &&
+ folio->index == start) {
+ /* Maybe PMD-mapped */
+ result = SCAN_PTE_MAPPED_HUGEPAGE;
/*
* For SCAN_PTE_MAPPED_HUGEPAGE, further processing
* by the caller won't touch the page cache, and so
--
2.39.3
* [PATCH 5/5] selftests: mm: support shmem mTHP collapse testing
2024-08-19 8:14 [PATCH 0/5] support shmem mTHP collapse Baolin Wang
` (3 preceding siblings ...)
2024-08-19 8:14 ` [PATCH 4/5] mm: khugepaged: support shmem mTHP collapse Baolin Wang
@ 2024-08-19 8:14 ` Baolin Wang
4 siblings, 0 replies; 10+ messages in thread
From: Baolin Wang @ 2024-08-19 8:14 UTC (permalink / raw)
To: akpm
Cc: hughd, willy, david, 21cnbao, ryan.roberts, shy828301, ziy,
baolin.wang, linux-mm, linux-kernel
Add shmem mTHP collapse testing. As with anonymous memory, users can use the
'-s' parameter to specify the shmem mTHP size to test.
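The '-s' option takes the mTHP size as a page order. A quick sketch of what
that means on 4KiB base pages; the invocation path is the usual in-tree
selftest location, assumed here:

```shell
# '-s' is a page order: on 4KiB base pages, order 4 selects 64KiB mTHPs.
order=4
size_kb=$(( 4 << order ))
echo "testing ${size_kb}kB shmem mTHP collapse"   # prints: testing 64kB shmem mTHP collapse

# Hypothetical invocation from the kernel tree (run as root):
#   cd tools/testing/selftests/mm && ./khugepaged -s 4
```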
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
tools/testing/selftests/mm/khugepaged.c | 4 +-
tools/testing/selftests/mm/thp_settings.c | 46 ++++++++++++++++++++---
tools/testing/selftests/mm/thp_settings.h | 9 ++++-
3 files changed, 51 insertions(+), 8 deletions(-)
diff --git a/tools/testing/selftests/mm/khugepaged.c b/tools/testing/selftests/mm/khugepaged.c
index 829320a519e7..56d4480e8d3c 100644
--- a/tools/testing/selftests/mm/khugepaged.c
+++ b/tools/testing/selftests/mm/khugepaged.c
@@ -1095,7 +1095,7 @@ static void usage(void)
fprintf(stderr, "\n\tSupported Options:\n");
fprintf(stderr, "\t\t-h: This help message.\n");
fprintf(stderr, "\t\t-s: mTHP size, expressed as page order.\n");
- fprintf(stderr, "\t\t Defaults to 0. Use this size for anon allocations.\n");
+ fprintf(stderr, "\t\t Defaults to 0. Use this size for anon or shmem allocations.\n");
exit(1);
}
@@ -1209,6 +1209,8 @@ int main(int argc, char **argv)
default_settings.khugepaged.pages_to_scan = hpage_pmd_nr * 8;
default_settings.hugepages[hpage_pmd_order].enabled = THP_INHERIT;
default_settings.hugepages[anon_order].enabled = THP_ALWAYS;
+ default_settings.shmem_hugepages[hpage_pmd_order].enabled = SHMEM_INHERIT;
+ default_settings.shmem_hugepages[anon_order].enabled = SHMEM_ALWAYS;
save_settings();
thp_push_settings(&default_settings);
diff --git a/tools/testing/selftests/mm/thp_settings.c b/tools/testing/selftests/mm/thp_settings.c
index a4163438108e..577eaab6266f 100644
--- a/tools/testing/selftests/mm/thp_settings.c
+++ b/tools/testing/selftests/mm/thp_settings.c
@@ -33,10 +33,11 @@ static const char * const thp_defrag_strings[] = {
};
static const char * const shmem_enabled_strings[] = {
+ "never",
"always",
"within_size",
"advise",
- "never",
+ "inherit",
"deny",
"force",
NULL
@@ -200,6 +201,7 @@ void thp_write_num(const char *name, unsigned long num)
void thp_read_settings(struct thp_settings *settings)
{
unsigned long orders = thp_supported_orders();
+ unsigned long shmem_orders = thp_shmem_supported_orders();
char path[PATH_MAX];
int i;
@@ -234,12 +236,24 @@ void thp_read_settings(struct thp_settings *settings)
settings->hugepages[i].enabled =
thp_read_string(path, thp_enabled_strings);
}
+
+ for (i = 0; i < NR_ORDERS; i++) {
+ if (!((1 << i) & shmem_orders)) {
+ settings->shmem_hugepages[i].enabled = SHMEM_NEVER;
+ continue;
+ }
+ snprintf(path, PATH_MAX, "hugepages-%ukB/shmem_enabled",
+ (getpagesize() >> 10) << i);
+ settings->shmem_hugepages[i].enabled =
+ thp_read_string(path, shmem_enabled_strings);
+ }
}
void thp_write_settings(struct thp_settings *settings)
{
struct khugepaged_settings *khugepaged = &settings->khugepaged;
unsigned long orders = thp_supported_orders();
+ unsigned long shmem_orders = thp_shmem_supported_orders();
char path[PATH_MAX];
int enabled;
int i;
@@ -271,6 +285,15 @@ void thp_write_settings(struct thp_settings *settings)
enabled = settings->hugepages[i].enabled;
thp_write_string(path, thp_enabled_strings[enabled]);
}
+
+ for (i = 0; i < NR_ORDERS; i++) {
+ if (!((1 << i) & shmem_orders))
+ continue;
+ snprintf(path, PATH_MAX, "hugepages-%ukB/shmem_enabled",
+ (getpagesize() >> 10) << i);
+ enabled = settings->shmem_hugepages[i].enabled;
+ thp_write_string(path, shmem_enabled_strings[enabled]);
+ }
}
struct thp_settings *thp_current_settings(void)
@@ -324,17 +347,18 @@ void thp_set_read_ahead_path(char *path)
dev_queue_read_ahead_path[sizeof(dev_queue_read_ahead_path) - 1] = '\0';
}
-unsigned long thp_supported_orders(void)
+static unsigned long __thp_supported_orders(bool is_shmem)
{
unsigned long orders = 0;
char path[PATH_MAX];
char buf[256];
- int ret;
- int i;
+ int ret, i;
+ char anon_dir[] = "enabled";
+ char shmem_dir[] = "shmem_enabled";
for (i = 0; i < NR_ORDERS; i++) {
- ret = snprintf(path, PATH_MAX, THP_SYSFS "hugepages-%ukB/enabled",
- (getpagesize() >> 10) << i);
+ ret = snprintf(path, PATH_MAX, THP_SYSFS "hugepages-%ukB/%s",
+ (getpagesize() >> 10) << i, is_shmem ? shmem_dir : anon_dir);
if (ret >= PATH_MAX) {
printf("%s: Pathname is too long\n", __func__);
exit(EXIT_FAILURE);
@@ -347,3 +371,13 @@ unsigned long thp_supported_orders(void)
return orders;
}
+
+unsigned long thp_supported_orders(void)
+{
+ return __thp_supported_orders(false);
+}
+
+unsigned long thp_shmem_supported_orders(void)
+{
+ return __thp_supported_orders(true);
+}
diff --git a/tools/testing/selftests/mm/thp_settings.h b/tools/testing/selftests/mm/thp_settings.h
index 71cbff05f4c7..876235a23460 100644
--- a/tools/testing/selftests/mm/thp_settings.h
+++ b/tools/testing/selftests/mm/thp_settings.h
@@ -22,10 +22,11 @@ enum thp_defrag {
};
enum shmem_enabled {
+ SHMEM_NEVER,
SHMEM_ALWAYS,
SHMEM_WITHIN_SIZE,
SHMEM_ADVISE,
- SHMEM_NEVER,
+ SHMEM_INHERIT,
SHMEM_DENY,
SHMEM_FORCE,
};
@@ -46,6 +47,10 @@ struct khugepaged_settings {
unsigned long pages_to_scan;
};
+struct shmem_hugepages_settings {
+ enum shmem_enabled enabled;
+};
+
struct thp_settings {
enum thp_enabled thp_enabled;
enum thp_defrag thp_defrag;
@@ -54,6 +59,7 @@ struct thp_settings {
struct khugepaged_settings khugepaged;
unsigned long read_ahead_kb;
struct hugepages_settings hugepages[NR_ORDERS];
+ struct shmem_hugepages_settings shmem_hugepages[NR_ORDERS];
};
int read_file(const char *path, char *buf, size_t buflen);
@@ -76,5 +82,6 @@ void thp_save_settings(void);
void thp_set_read_ahead_path(char *path);
unsigned long thp_supported_orders(void);
+unsigned long thp_shmem_supported_orders(void);
#endif /* __THP_SETTINGS_H__ */
--
2.39.3
* Re: [PATCH 1/5] mm: khugepaged: expand the is_refcount_suitable() to support file folios
2024-08-19 8:14 ` [PATCH 1/5] mm: khugepaged: expand the is_refcount_suitable() to support file folios Baolin Wang
@ 2024-08-19 8:36 ` David Hildenbrand
2024-08-19 8:42 ` Baolin Wang
0 siblings, 1 reply; 10+ messages in thread
From: David Hildenbrand @ 2024-08-19 8:36 UTC (permalink / raw)
To: Baolin Wang, akpm
Cc: hughd, willy, 21cnbao, ryan.roberts, shy828301, ziy, linux-mm,
linux-kernel
On 19.08.24 10:14, Baolin Wang wrote:
> Expand the is_refcount_suitable() to support reference checks for file folios,
> as preparation for supporting shmem mTHP collapse.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> mm/khugepaged.c | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index cdd1d8655a76..f11b4f172e61 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -549,8 +549,14 @@ static bool is_refcount_suitable(struct folio *folio)
> int expected_refcount;
>
> expected_refcount = folio_mapcount(folio);
> - if (folio_test_swapcache(folio))
> + if (folio_test_anon(folio)) {
> + expected_refcount += folio_test_swapcache(folio) ?
> + folio_nr_pages(folio) : 0;
> + } else {
> expected_refcount += folio_nr_pages(folio);
> + if (folio_test_private(folio))
> + expected_refcount++;
> + }
Alternatively, a bit neater
if (!folio_test_anon(folio) || folio_test_swapcache(folio))
expected_refcount += folio_nr_pages(folio);
if (folio_test_private(folio))
expected_refcount++;
The latter check should be fine even for anon folios (although always false)
>
> return folio_ref_count(folio) == expected_refcount;
> }
> @@ -2285,8 +2291,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> break;
> }
>
> - if (folio_ref_count(folio) !=
> - 1 + folio_mapcount(folio) + folio_test_private(folio)) {
The "1" is due to the pagecache, right? IIUC, we don't hold a raised
folio refcount as we do the xas_for_each().
--
Cheers,
David / dhildenb
* Re: [PATCH 1/5] mm: khugepaged: expand the is_refcount_suitable() to support file folios
2024-08-19 8:36 ` David Hildenbrand
@ 2024-08-19 8:42 ` Baolin Wang
0 siblings, 0 replies; 10+ messages in thread
From: Baolin Wang @ 2024-08-19 8:42 UTC (permalink / raw)
To: David Hildenbrand, akpm
Cc: hughd, willy, 21cnbao, ryan.roberts, shy828301, ziy, linux-mm,
linux-kernel
On 2024/8/19 16:36, David Hildenbrand wrote:
> On 19.08.24 10:14, Baolin Wang wrote:
>> Expand the is_refcount_suitable() to support reference checks for file
>> folios,
>> as preparation for supporting shmem mTHP collapse.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>> mm/khugepaged.c | 11 ++++++++---
>> 1 file changed, 8 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index cdd1d8655a76..f11b4f172e61 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -549,8 +549,14 @@ static bool is_refcount_suitable(struct folio
>> *folio)
>> int expected_refcount;
>> expected_refcount = folio_mapcount(folio);
>> - if (folio_test_swapcache(folio))
>> + if (folio_test_anon(folio)) {
>> + expected_refcount += folio_test_swapcache(folio) ?
>> + folio_nr_pages(folio) : 0;
>> + } else {
>> expected_refcount += folio_nr_pages(folio);
>> + if (folio_test_private(folio))
>> + expected_refcount++;
>> + }
>
> Alternatively, a bit neater
>
> if (!folio_test_anon(folio) || folio_test_swapcache(folio))
> expected_refcount += folio_nr_pages(folio);
> if (folio_test_private(folio))
> expected_refcount++;
>
> The latter check should be fine even for anon folios (although always
> false)
Looks better. Will do in v2.
>> return folio_ref_count(folio) == expected_refcount;
>> }
>> @@ -2285,8 +2291,7 @@ static int hpage_collapse_scan_file(struct
>> mm_struct *mm, unsigned long addr,
>> break;
>> }
>> - if (folio_ref_count(folio) !=
>> - 1 + folio_mapcount(folio) + folio_test_private(folio)) {
>
> The "1" is due to the pagecache, right? IIUC, we don't hold a raised
> folio refcount as we do the xas_for_each().
Right.
* Re: [PATCH 2/5] mm: khugepaged: use the number of pages in the folio to check the reference count
2024-08-19 8:14 ` [PATCH 2/5] mm: khugepaged: use the number of pages in the folio to check the reference count Baolin Wang
@ 2024-08-19 9:40 ` David Hildenbrand
2024-08-19 10:05 ` Baolin Wang
0 siblings, 1 reply; 10+ messages in thread
From: David Hildenbrand @ 2024-08-19 9:40 UTC (permalink / raw)
To: Baolin Wang, akpm
Cc: hughd, willy, 21cnbao, ryan.roberts, shy828301, ziy, linux-mm,
linux-kernel
On 19.08.24 10:14, Baolin Wang wrote:
> Use the number of pages in the folio to check the reference count as
> preparation for supporting shmem mTHP collapse.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> mm/khugepaged.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index f11b4f172e61..60d95f08610c 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1994,7 +1994,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> /*
> * We control three references to the folio:
^ "three" is wrong now.
> * - we hold a pin on it;
> - * - one reference from page cache;
> + * - nr_pages reference from page cache;
> * - one from lru_isolate_folio;
> * If those are the only references, then any new usage
> * of the folio will have to fetch it from the page
> @@ -2002,7 +2002,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> * truncate, so any new usage will be blocked until we
> * unlock folio after collapse/during rollback.
> */
> - if (folio_ref_count(folio) != 3) {
> + if (folio_ref_count(folio) != 2 + folio_nr_pages(folio)) {
> result = SCAN_PAGE_COUNT;
> xas_unlock_irq(&xas);
> folio_putback_lru(folio);
> @@ -2185,7 +2185,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> folio_clear_active(folio);
> folio_clear_unevictable(folio);
> folio_unlock(folio);
> - folio_put_refs(folio, 3);
> + folio_put_refs(folio, 2 + folio_nr_pages(folio));
> }
>
> goto out;
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH 2/5] mm: khugepaged: use the number of pages in the folio to check the reference count
2024-08-19 9:40 ` David Hildenbrand
@ 2024-08-19 10:05 ` Baolin Wang
0 siblings, 0 replies; 10+ messages in thread
From: Baolin Wang @ 2024-08-19 10:05 UTC (permalink / raw)
To: David Hildenbrand, akpm
Cc: hughd, willy, 21cnbao, ryan.roberts, shy828301, ziy, linux-mm,
linux-kernel
On 2024/8/19 17:40, David Hildenbrand wrote:
> On 19.08.24 10:14, Baolin Wang wrote:
>> Use the number of pages in the folio to check the reference count as
>> preparation for supporting shmem mTHP collapse.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>> mm/khugepaged.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index f11b4f172e61..60d95f08610c 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -1994,7 +1994,7 @@ static int collapse_file(struct mm_struct *mm,
>> unsigned long addr,
>> /*
>> * We control three references to the folio:
>
> ^ "three" is wrong now.
Ah, good catch. Will change to '2 + nr_pages'.
>
>> * - we hold a pin on it;
>> - * - one reference from page cache;
>> + * - nr_pages reference from page cache;
>> * - one from lru_isolate_folio;
>> * If those are the only references, then any new usage
>> * of the folio will have to fetch it from the page
>> @@ -2002,7 +2002,7 @@ static int collapse_file(struct mm_struct *mm,
>> unsigned long addr,
>> * truncate, so any new usage will be blocked until we
>> * unlock folio after collapse/during rollback.
>> */
>> - if (folio_ref_count(folio) != 3) {
>> + if (folio_ref_count(folio) != 2 + folio_nr_pages(folio)) {
>> result = SCAN_PAGE_COUNT;
>> xas_unlock_irq(&xas);
>> folio_putback_lru(folio);
>> @@ -2185,7 +2185,7 @@ static int collapse_file(struct mm_struct *mm,
>> unsigned long addr,
>> folio_clear_active(folio);
>> folio_clear_unevictable(folio);
>> folio_unlock(folio);
>> - folio_put_refs(folio, 3);
>> + folio_put_refs(folio, 2 + folio_nr_pages(folio));
>> }
>> goto out;
>
> Acked-by: David Hildenbrand <david@redhat.com>
Thanks for reviewing.