* [PATCH v2 0/3] Some cleanups for shmem
@ 2024-07-13 13:24 Baolin Wang
2024-07-13 13:24 ` [PATCH v2 1/3] mm: shmem: simplify the suitable huge orders validation for tmpfs Baolin Wang
` (3 more replies)
0 siblings, 4 replies; 14+ messages in thread
From: Baolin Wang @ 2024-07-13 13:24 UTC
To: akpm, hughd
Cc: willy, david, 21cnbao, ryan.roberts, ziy, ioworker0, baolin.wang,
linux-mm, linux-kernel
Hi,
This series does some cleanups to reuse code, rename functions, and simplify
the logic to make the code clearer. No functional changes are expected.
Changes from v1:
- Add a dummy function in case CONFIG_TRANSPARENT_HUGEPAGE is not
enabled, which fixes a build error reported by the kernel test robot.
Baolin Wang (3):
mm: shmem: simplify the suitable huge orders validation for tmpfs
mm: shmem: rename shmem_is_huge() to shmem_huge_global_enabled()
mm: shmem: move shmem_huge_global_enabled() into
shmem_allowable_huge_orders()
include/linux/shmem_fs.h | 11 +----
mm/huge_memory.c | 11 ++---
mm/shmem.c | 91 +++++++++++++++++++++-------------------
3 files changed, 53 insertions(+), 60 deletions(-)
--
2.39.3
* [PATCH v2 1/3] mm: shmem: simplify the suitable huge orders validation for tmpfs
2024-07-13 13:24 [PATCH v2 0/3] Some cleanups for shmem Baolin Wang
@ 2024-07-13 13:24 ` Baolin Wang
2024-07-15 13:30 ` Ryan Roberts
2024-07-25 13:07 ` David Hildenbrand
2024-07-13 13:24 ` [PATCH v2 2/3] mm: shmem: rename shmem_is_huge() to shmem_huge_global_enabled() Baolin Wang
` (2 subsequent siblings)
3 siblings, 2 replies; 14+ messages in thread
From: Baolin Wang @ 2024-07-13 13:24 UTC
To: akpm, hughd
Cc: willy, david, 21cnbao, ryan.roberts, ziy, ioworker0, baolin.wang,
linux-mm, linux-kernel
Move the suitable huge orders validation into shmem_suitable_orders() for
tmpfs, which reuses some code and simplifies the logic.
In addition, the caller has no special handling for the -E2BIG error code
when a conflict with PMD-sized THP is found in the page cache for tmpfs; it
simply falls back to order-0 allocation, which is exactly what this patch
does, so this simplification introduces no functional change.
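For illustration only (not part of the patch), here is a minimal userspace
model of the resulting fallback behaviour. The xa_find() page-cache lookup is
replaced by a hypothetical has_conflict() stub, and highest_order() /
round_down() are open-coded:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the xa_find() conflict check. */
static bool has_conflict(unsigned long index, unsigned long pages)
{
	/* Pretend an order-9 extent at this index already holds a page. */
	return pages == 512;
}

/* Drop conflicting orders until one fits; 0 means fall back to order 0. */
static unsigned long suitable_orders(unsigned long index, unsigned long orders)
{
	while (orders) {
		int order = 63 - __builtin_clzl(orders);  /* highest_order() */
		unsigned long pages = 1UL << order;

		index &= ~(pages - 1);                    /* round_down() */
		if (!has_conflict(index, pages))
			break;        /* keep the highest conflict-free order */
		orders &= ~(1UL << order);                /* try a lower order */
	}
	return orders;
}

int main(void)
{
	/* Orders 9 and 4 requested; order 9 conflicts, so order 4 remains. */
	printf("remaining orders: %#lx\n",
	       suitable_orders(4096, (1UL << 9) | (1UL << 4)));
	return 0;
}

When every large order conflicts, the returned mask is empty and allocation
proceeds at order 0, which is the same outcome the old -E2BIG path produced.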
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
mm/shmem.c | 39 +++++++++++++++------------------------
1 file changed, 15 insertions(+), 24 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index f24dfbd387ba..db7e9808830f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1685,19 +1685,29 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
struct address_space *mapping, pgoff_t index,
unsigned long orders)
{
- struct vm_area_struct *vma = vmf->vma;
+ struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
unsigned long pages;
int order;
- orders = thp_vma_suitable_orders(vma, vmf->address, orders);
- if (!orders)
- return 0;
+ if (vma) {
+ orders = thp_vma_suitable_orders(vma, vmf->address, orders);
+ if (!orders)
+ return 0;
+ }
/* Find the highest order that can add into the page cache */
order = highest_order(orders);
while (orders) {
pages = 1UL << order;
index = round_down(index, pages);
+ /*
+ * Check for conflict before waiting on a huge allocation.
+ * Conflict might be that a huge page has just been allocated
+ * and added to page cache by a racing thread, or that there
+ * is already at least one small page in the huge extent.
+ * Be careful to retry when appropriate, but not forever!
+ * Elsewhere -EEXIST would be the right code, but not here.
+ */
if (!xa_find(&mapping->i_pages, &index,
index + pages - 1, XA_PRESENT))
break;
@@ -1735,7 +1745,6 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
{
struct address_space *mapping = inode->i_mapping;
struct shmem_inode_info *info = SHMEM_I(inode);
- struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
unsigned long suitable_orders = 0;
struct folio *folio = NULL;
long pages;
@@ -1745,26 +1754,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
orders = 0;
if (orders > 0) {
- if (vma && vma_is_anon_shmem(vma)) {
- suitable_orders = shmem_suitable_orders(inode, vmf,
+ suitable_orders = shmem_suitable_orders(inode, vmf,
mapping, index, orders);
- } else if (orders & BIT(HPAGE_PMD_ORDER)) {
- pages = HPAGE_PMD_NR;
- suitable_orders = BIT(HPAGE_PMD_ORDER);
- index = round_down(index, HPAGE_PMD_NR);
-
- /*
- * Check for conflict before waiting on a huge allocation.
- * Conflict might be that a huge page has just been allocated
- * and added to page cache by a racing thread, or that there
- * is already at least one small page in the huge extent.
- * Be careful to retry when appropriate, but not forever!
- * Elsewhere -EEXIST would be the right code, but not here.
- */
- if (xa_find(&mapping->i_pages, &index,
- index + HPAGE_PMD_NR - 1, XA_PRESENT))
- return ERR_PTR(-E2BIG);
- }
order = highest_order(suitable_orders);
while (suitable_orders) {
--
2.39.3
* [PATCH v2 2/3] mm: shmem: rename shmem_is_huge() to shmem_huge_global_enabled()
2024-07-13 13:24 [PATCH v2 0/3] Some cleanups for shmem Baolin Wang
2024-07-13 13:24 ` [PATCH v2 1/3] mm: shmem: simplify the suitable huge orders validation for tmpfs Baolin Wang
@ 2024-07-13 13:24 ` Baolin Wang
2024-07-15 13:32 ` Ryan Roberts
2024-07-25 13:08 ` David Hildenbrand
2024-07-13 13:24 ` [PATCH v2 3/3] mm: shmem: move shmem_huge_global_enabled() into shmem_allowable_huge_orders() Baolin Wang
2024-07-24 19:14 ` [PATCH v2 0/3] Some cleanups for shmem Andrew Morton
3 siblings, 2 replies; 14+ messages in thread
From: Baolin Wang @ 2024-07-13 13:24 UTC
To: akpm, hughd
Cc: willy, david, 21cnbao, ryan.roberts, ziy, ioworker0, baolin.wang,
linux-mm, linux-kernel
shmem_is_huge() is now used to check whether the top-level huge page option
is enabled, so rename it to reflect that usage.
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
include/linux/shmem_fs.h | 9 +++++----
mm/huge_memory.c | 5 +++--
mm/shmem.c | 15 ++++++++-------
3 files changed, 16 insertions(+), 13 deletions(-)
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 1d06b1e5408a..405ee8d3589a 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -111,14 +111,15 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
int shmem_unuse(unsigned int type);
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
- struct mm_struct *mm, unsigned long vm_flags);
+extern bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index, bool shmem_huge_force,
+ struct mm_struct *mm, unsigned long vm_flags);
unsigned long shmem_allowable_huge_orders(struct inode *inode,
struct vm_area_struct *vma, pgoff_t index,
bool global_huge);
#else
-static __always_inline bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
- struct mm_struct *mm, unsigned long vm_flags)
+static __always_inline bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+ bool shmem_huge_force, struct mm_struct *mm,
+ unsigned long vm_flags)
{
return false;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f9696c94e211..cc9bad12be75 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -152,8 +152,9 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
* own flags.
*/
if (!in_pf && shmem_file(vma->vm_file)) {
- bool global_huge = shmem_is_huge(file_inode(vma->vm_file), vma->vm_pgoff,
- !enforce_sysfs, vma->vm_mm, vm_flags);
+ bool global_huge = shmem_huge_global_enabled(file_inode(vma->vm_file),
+ vma->vm_pgoff, !enforce_sysfs,
+ vma->vm_mm, vm_flags);
if (!vma_is_anon_shmem(vma))
return global_huge ? orders : 0;
diff --git a/mm/shmem.c b/mm/shmem.c
index db7e9808830f..1445dcd39b6f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -548,9 +548,9 @@ static bool shmem_confirm_swap(struct address_space *mapping,
static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
-static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
- bool shmem_huge_force, struct mm_struct *mm,
- unsigned long vm_flags)
+static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+ bool shmem_huge_force, struct mm_struct *mm,
+ unsigned long vm_flags)
{
loff_t i_size;
@@ -581,14 +581,15 @@ static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
}
}
-bool shmem_is_huge(struct inode *inode, pgoff_t index,
+bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
bool shmem_huge_force, struct mm_struct *mm,
unsigned long vm_flags)
{
if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
return false;
- return __shmem_is_huge(inode, index, shmem_huge_force, mm, vm_flags);
+ return __shmem_huge_global_enabled(inode, index, shmem_huge_force,
+ mm, vm_flags);
}
#if defined(CONFIG_SYSFS)
@@ -1156,7 +1157,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
STATX_ATTR_NODUMP);
generic_fillattr(idmap, request_mask, inode, stat);
- if (shmem_is_huge(inode, 0, false, NULL, 0))
+ if (shmem_huge_global_enabled(inode, 0, false, NULL, 0))
stat->blksize = HPAGE_PMD_SIZE;
if (request_mask & STATX_BTIME) {
@@ -2153,7 +2154,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
return 0;
}
- huge = shmem_is_huge(inode, index, false, fault_mm,
+ huge = shmem_huge_global_enabled(inode, index, false, fault_mm,
vma ? vma->vm_flags : 0);
/* Find hugepage orders that are allowed for anonymous shmem. */
if (vma && vma_is_anon_shmem(vma))
--
2.39.3
* [PATCH v2 3/3] mm: shmem: move shmem_huge_global_enabled() into shmem_allowable_huge_orders()
2024-07-13 13:24 [PATCH v2 0/3] Some cleanups for shmem Baolin Wang
2024-07-13 13:24 ` [PATCH v2 1/3] mm: shmem: simplify the suitable huge orders validation for tmpfs Baolin Wang
2024-07-13 13:24 ` [PATCH v2 2/3] mm: shmem: rename shmem_is_huge() to shmem_huge_global_enabled() Baolin Wang
@ 2024-07-13 13:24 ` Baolin Wang
2024-07-15 13:36 ` Ryan Roberts
2024-07-24 19:14 ` [PATCH v2 0/3] Some cleanups for shmem Andrew Morton
3 siblings, 1 reply; 14+ messages in thread
From: Baolin Wang @ 2024-07-13 13:24 UTC
To: akpm, hughd
Cc: willy, david, 21cnbao, ryan.roberts, ziy, ioworker0, baolin.wang,
linux-mm, linux-kernel
Move shmem_huge_global_enabled() into shmem_allowable_huge_orders(), so that
shmem_allowable_huge_orders() can also find the allowable huge orders for
tmpfs. Moreover, shmem_huge_global_enabled() can now become static.
No functional changes.
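For illustration only (a userspace model, not the kernel code), the
consolidated shape makes the tmpfs case collapse to "PMD order or nothing"
inside a single function; huge_global_enabled() below is a stub for the real
sysfs/mount-option logic, and vma_model is a hypothetical reduction of
struct vm_area_struct:

#include <stdbool.h>
#include <stdio.h>

#define PMD_ORDER 9

struct vma_model { bool anon_shmem; };

/* Stub for the (now static) shmem_huge_global_enabled() check. */
static bool huge_global_enabled(bool shmem_huge_force)
{
	return shmem_huge_force; /* real code also checks shmem_huge, i_size */
}

static unsigned long allowable_huge_orders(const struct vma_model *vma,
					   bool shmem_huge_force)
{
	bool global_huge = huge_global_enabled(shmem_huge_force);

	if (!vma || !vma->anon_shmem) {
		/* tmpfs: only PMD-sized THP, gated by the global setting */
		return global_huge ? 1UL << PMD_ORDER : 0;
	}
	/* anon shmem: the per-order sysfs masks would be applied here */
	return (1UL << (PMD_ORDER + 1)) - 1;
}

int main(void)
{
	printf("tmpfs, huge enabled:  %#lx\n",
	       allowable_huge_orders(NULL, true));
	printf("tmpfs, huge disabled: %#lx\n",
	       allowable_huge_orders(NULL, false));
	return 0;
}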
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
include/linux/shmem_fs.h | 12 ++----------
mm/huge_memory.c | 12 +++---------
mm/shmem.c | 41 ++++++++++++++++++++++++++--------------
3 files changed, 32 insertions(+), 33 deletions(-)
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 405ee8d3589a..1564d7d3ca61 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -111,21 +111,13 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
int shmem_unuse(unsigned int type);
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index, bool shmem_huge_force,
- struct mm_struct *mm, unsigned long vm_flags);
unsigned long shmem_allowable_huge_orders(struct inode *inode,
struct vm_area_struct *vma, pgoff_t index,
- bool global_huge);
+ bool shmem_huge_force);
#else
-static __always_inline bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
- bool shmem_huge_force, struct mm_struct *mm,
- unsigned long vm_flags)
-{
- return false;
-}
static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
struct vm_area_struct *vma, pgoff_t index,
- bool global_huge)
+ bool shmem_huge_force)
{
return 0;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cc9bad12be75..f69980b5b5fc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -151,16 +151,10 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
* Must be done before hugepage flags check since shmem has its
* own flags.
*/
- if (!in_pf && shmem_file(vma->vm_file)) {
- bool global_huge = shmem_huge_global_enabled(file_inode(vma->vm_file),
- vma->vm_pgoff, !enforce_sysfs,
- vma->vm_mm, vm_flags);
-
- if (!vma_is_anon_shmem(vma))
- return global_huge ? orders : 0;
+ if (!in_pf && shmem_file(vma->vm_file))
return shmem_allowable_huge_orders(file_inode(vma->vm_file),
- vma, vma->vm_pgoff, global_huge);
- }
+ vma, vma->vm_pgoff,
+ !enforce_sysfs);
if (!vma_is_anonymous(vma)) {
/*
diff --git a/mm/shmem.c b/mm/shmem.c
index 1445dcd39b6f..4d274f5a17d9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -581,7 +581,7 @@ static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
}
}
-bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
bool shmem_huge_force, struct mm_struct *mm,
unsigned long vm_flags)
{
@@ -772,6 +772,13 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
{
return 0;
}
+
+static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+ bool shmem_huge_force, struct mm_struct *mm,
+ unsigned long vm_flags)
+{
+ return false;
+}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
/*
@@ -1625,27 +1632,39 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
unsigned long shmem_allowable_huge_orders(struct inode *inode,
struct vm_area_struct *vma, pgoff_t index,
- bool global_huge)
+ bool shmem_huge_force)
{
unsigned long mask = READ_ONCE(huge_shmem_orders_always);
unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
- unsigned long vm_flags = vma->vm_flags;
+ unsigned long vm_flags = vma ? vma->vm_flags : 0;
+ struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
/*
* Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
* are enabled for this vma.
*/
unsigned long orders = BIT(PMD_ORDER + 1) - 1;
+ bool global_huge;
loff_t i_size;
int order;
- if ((vm_flags & VM_NOHUGEPAGE) ||
- test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
+ if (vma && ((vm_flags & VM_NOHUGEPAGE) ||
+ test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
return 0;
/* If the hardware/firmware marked hugepage support disabled. */
if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))
return 0;
+ global_huge = shmem_huge_global_enabled(inode, index, shmem_huge_force,
+ fault_mm, vm_flags);
+ if (!vma || !vma_is_anon_shmem(vma)) {
+ /*
+ * For tmpfs, we now only support PMD sized THP if huge page
+ * is enabled, otherwise fallback to order 0.
+ */
+ return global_huge ? BIT(HPAGE_PMD_ORDER) : 0;
+ }
+
/*
* Following the 'deny' semantics of the top level, force the huge
* option off from all mounts.
@@ -2081,7 +2100,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
struct mm_struct *fault_mm;
struct folio *folio;
int error;
- bool alloced, huge;
+ bool alloced;
unsigned long orders = 0;
if (WARN_ON_ONCE(!shmem_mapping(inode->i_mapping)))
@@ -2154,14 +2173,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
return 0;
}
- huge = shmem_huge_global_enabled(inode, index, false, fault_mm,
- vma ? vma->vm_flags : 0);
- /* Find hugepage orders that are allowed for anonymous shmem. */
- if (vma && vma_is_anon_shmem(vma))
- orders = shmem_allowable_huge_orders(inode, vma, index, huge);
- else if (huge)
- orders = BIT(HPAGE_PMD_ORDER);
-
+ /* Find hugepage orders that are allowed for anonymous shmem and tmpfs. */
+ orders = shmem_allowable_huge_orders(inode, vma, index, false);
if (orders > 0) {
gfp_t huge_gfp;
--
2.39.3
* Re: [PATCH v2 1/3] mm: shmem: simplify the suitable huge orders validation for tmpfs
2024-07-13 13:24 ` [PATCH v2 1/3] mm: shmem: simplify the suitable huge orders validation for tmpfs Baolin Wang
@ 2024-07-15 13:30 ` Ryan Roberts
2024-07-25 13:07 ` David Hildenbrand
1 sibling, 0 replies; 14+ messages in thread
From: Ryan Roberts @ 2024-07-15 13:30 UTC
To: Baolin Wang, akpm, hughd
Cc: willy, david, 21cnbao, ziy, ioworker0, linux-mm, linux-kernel
On 13/07/2024 14:24, Baolin Wang wrote:
> Move the suitable huge orders validation into shmem_suitable_orders() for
> tmpfs, which reuses some code and simplifies the logic.
>
> In addition, the caller has no special handling for the -E2BIG error code
> when a conflict with PMD-sized THP is found in the page cache for tmpfs; it
> simply falls back to order-0 allocation, which is exactly what this patch
> does, so this simplification introduces no functional change.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> mm/shmem.c | 39 +++++++++++++++------------------------
> 1 file changed, 15 insertions(+), 24 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index f24dfbd387ba..db7e9808830f 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1685,19 +1685,29 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
> struct address_space *mapping, pgoff_t index,
> unsigned long orders)
> {
> - struct vm_area_struct *vma = vmf->vma;
> + struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
> unsigned long pages;
> int order;
>
> - orders = thp_vma_suitable_orders(vma, vmf->address, orders);
> - if (!orders)
> - return 0;
> + if (vma) {
> + orders = thp_vma_suitable_orders(vma, vmf->address, orders);
> + if (!orders)
> + return 0;
> + }
>
> /* Find the highest order that can add into the page cache */
> order = highest_order(orders);
> while (orders) {
> pages = 1UL << order;
> index = round_down(index, pages);
> + /*
> + * Check for conflict before waiting on a huge allocation.
> + * Conflict might be that a huge page has just been allocated
> + * and added to page cache by a racing thread, or that there
> + * is already at least one small page in the huge extent.
> + * Be careful to retry when appropriate, but not forever!
> + * Elsewhere -EEXIST would be the right code, but not here.
> + */
> if (!xa_find(&mapping->i_pages, &index,
> index + pages - 1, XA_PRESENT))
> break;
> @@ -1735,7 +1745,6 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
> {
> struct address_space *mapping = inode->i_mapping;
> struct shmem_inode_info *info = SHMEM_I(inode);
> - struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
> unsigned long suitable_orders = 0;
> struct folio *folio = NULL;
> long pages;
> @@ -1745,26 +1754,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
> orders = 0;
>
> if (orders > 0) {
> - if (vma && vma_is_anon_shmem(vma)) {
> - suitable_orders = shmem_suitable_orders(inode, vmf,
> + suitable_orders = shmem_suitable_orders(inode, vmf,
> mapping, index, orders);
> - } else if (orders & BIT(HPAGE_PMD_ORDER)) {
> - pages = HPAGE_PMD_NR;
> - suitable_orders = BIT(HPAGE_PMD_ORDER);
> - index = round_down(index, HPAGE_PMD_NR);
> -
> - /*
> - * Check for conflict before waiting on a huge allocation.
> - * Conflict might be that a huge page has just been allocated
> - * and added to page cache by a racing thread, or that there
> - * is already at least one small page in the huge extent.
> - * Be careful to retry when appropriate, but not forever!
> - * Elsewhere -EEXIST would be the right code, but not here.
> - */
> - if (xa_find(&mapping->i_pages, &index,
> - index + HPAGE_PMD_NR - 1, XA_PRESENT))
> - return ERR_PTR(-E2BIG);
> - }
>
> order = highest_order(suitable_orders);
> while (suitable_orders) {
* Re: [PATCH v2 2/3] mm: shmem: rename shmem_is_huge() to shmem_huge_global_enabled()
2024-07-13 13:24 ` [PATCH v2 2/3] mm: shmem: rename shmem_is_huge() to shmem_huge_global_enabled() Baolin Wang
@ 2024-07-15 13:32 ` Ryan Roberts
2024-07-25 13:08 ` David Hildenbrand
1 sibling, 0 replies; 14+ messages in thread
From: Ryan Roberts @ 2024-07-15 13:32 UTC
To: Baolin Wang, akpm, hughd
Cc: willy, david, 21cnbao, ziy, ioworker0, linux-mm, linux-kernel
On 13/07/2024 14:24, Baolin Wang wrote:
> shmem_is_huge() is now used to check whether the top-level huge page option
> is enabled, so rename it to reflect that usage.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> include/linux/shmem_fs.h | 9 +++++----
> mm/huge_memory.c | 5 +++--
> mm/shmem.c | 15 ++++++++-------
> 3 files changed, 16 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index 1d06b1e5408a..405ee8d3589a 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -111,14 +111,15 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
> int shmem_unuse(unsigned int type);
>
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -extern bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
> - struct mm_struct *mm, unsigned long vm_flags);
> +extern bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index, bool shmem_huge_force,
> + struct mm_struct *mm, unsigned long vm_flags);
> unsigned long shmem_allowable_huge_orders(struct inode *inode,
> struct vm_area_struct *vma, pgoff_t index,
> bool global_huge);
> #else
> -static __always_inline bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
> - struct mm_struct *mm, unsigned long vm_flags)
> +static __always_inline bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
> + bool shmem_huge_force, struct mm_struct *mm,
> + unsigned long vm_flags)
> {
> return false;
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f9696c94e211..cc9bad12be75 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -152,8 +152,9 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
> * own flags.
> */
> if (!in_pf && shmem_file(vma->vm_file)) {
> - bool global_huge = shmem_is_huge(file_inode(vma->vm_file), vma->vm_pgoff,
> - !enforce_sysfs, vma->vm_mm, vm_flags);
> + bool global_huge = shmem_huge_global_enabled(file_inode(vma->vm_file),
> + vma->vm_pgoff, !enforce_sysfs,
> + vma->vm_mm, vm_flags);
>
> if (!vma_is_anon_shmem(vma))
> return global_huge ? orders : 0;
> diff --git a/mm/shmem.c b/mm/shmem.c
> index db7e9808830f..1445dcd39b6f 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -548,9 +548,9 @@ static bool shmem_confirm_swap(struct address_space *mapping,
>
> static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
>
> -static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
> - bool shmem_huge_force, struct mm_struct *mm,
> - unsigned long vm_flags)
> +static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
> + bool shmem_huge_force, struct mm_struct *mm,
> + unsigned long vm_flags)
> {
> loff_t i_size;
>
> @@ -581,14 +581,15 @@ static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
> }
> }
>
> -bool shmem_is_huge(struct inode *inode, pgoff_t index,
> +bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
> bool shmem_huge_force, struct mm_struct *mm,
> unsigned long vm_flags)
> {
> if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
> return false;
>
> - return __shmem_is_huge(inode, index, shmem_huge_force, mm, vm_flags);
> + return __shmem_huge_global_enabled(inode, index, shmem_huge_force,
> + mm, vm_flags);
> }
>
> #if defined(CONFIG_SYSFS)
> @@ -1156,7 +1157,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
> STATX_ATTR_NODUMP);
> generic_fillattr(idmap, request_mask, inode, stat);
>
> - if (shmem_is_huge(inode, 0, false, NULL, 0))
> + if (shmem_huge_global_enabled(inode, 0, false, NULL, 0))
> stat->blksize = HPAGE_PMD_SIZE;
>
> if (request_mask & STATX_BTIME) {
> @@ -2153,7 +2154,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
> return 0;
> }
>
> - huge = shmem_is_huge(inode, index, false, fault_mm,
> + huge = shmem_huge_global_enabled(inode, index, false, fault_mm,
> vma ? vma->vm_flags : 0);
> /* Find hugepage orders that are allowed for anonymous shmem. */
> if (vma && vma_is_anon_shmem(vma))
* Re: [PATCH v2 3/3] mm: shmem: move shmem_huge_global_enabled() into shmem_allowable_huge_orders()
2024-07-13 13:24 ` [PATCH v2 3/3] mm: shmem: move shmem_huge_global_enabled() into shmem_allowable_huge_orders() Baolin Wang
@ 2024-07-15 13:36 ` Ryan Roberts
2024-07-22 2:41 ` Baolin Wang
0 siblings, 1 reply; 14+ messages in thread
From: Ryan Roberts @ 2024-07-15 13:36 UTC
To: Baolin Wang, akpm, hughd
Cc: willy, david, 21cnbao, ziy, ioworker0, linux-mm, linux-kernel
On 13/07/2024 14:24, Baolin Wang wrote:
> Move shmem_huge_global_enabled() into shmem_allowable_huge_orders(), so that
> shmem_allowable_huge_orders() can also find the allowable huge orders for
> tmpfs. Moreover, shmem_huge_global_enabled() can now become static.
>
> No functional changes.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
one nit below, but either way:
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
> include/linux/shmem_fs.h | 12 ++----------
> mm/huge_memory.c | 12 +++---------
> mm/shmem.c | 41 ++++++++++++++++++++++++++--------------
> 3 files changed, 32 insertions(+), 33 deletions(-)
>
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index 405ee8d3589a..1564d7d3ca61 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -111,21 +111,13 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
> int shmem_unuse(unsigned int type);
>
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -extern bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index, bool shmem_huge_force,
> - struct mm_struct *mm, unsigned long vm_flags);
> unsigned long shmem_allowable_huge_orders(struct inode *inode,
> struct vm_area_struct *vma, pgoff_t index,
> - bool global_huge);
> + bool shmem_huge_force);
> #else
> -static __always_inline bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
> - bool shmem_huge_force, struct mm_struct *mm,
> - unsigned long vm_flags)
> -{
> - return false;
> -}
> static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
> struct vm_area_struct *vma, pgoff_t index,
> - bool global_huge)
> + bool shmem_huge_force)
> {
> return 0;
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index cc9bad12be75..f69980b5b5fc 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -151,16 +151,10 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
> * Must be done before hugepage flags check since shmem has its
> * own flags.
> */
> - if (!in_pf && shmem_file(vma->vm_file)) {
> - bool global_huge = shmem_huge_global_enabled(file_inode(vma->vm_file),
> - vma->vm_pgoff, !enforce_sysfs,
> - vma->vm_mm, vm_flags);
> -
> - if (!vma_is_anon_shmem(vma))
> - return global_huge ? orders : 0;
> + if (!in_pf && shmem_file(vma->vm_file))
> return shmem_allowable_huge_orders(file_inode(vma->vm_file),
> - vma, vma->vm_pgoff, global_huge);
> - }
> + vma, vma->vm_pgoff,
> + !enforce_sysfs);
>
> if (!vma_is_anonymous(vma)) {
> /*
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 1445dcd39b6f..4d274f5a17d9 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -581,7 +581,7 @@ static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
> }
> }
>
> -bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
> +static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
> bool shmem_huge_force, struct mm_struct *mm,
> unsigned long vm_flags)
> {
> @@ -772,6 +772,13 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
> {
> return 0;
> }
> +
> +static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
> + bool shmem_huge_force, struct mm_struct *mm,
> + unsigned long vm_flags)
> +{
> + return false;
> +}
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> /*
> @@ -1625,27 +1632,39 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> unsigned long shmem_allowable_huge_orders(struct inode *inode,
> struct vm_area_struct *vma, pgoff_t index,
> - bool global_huge)
> + bool shmem_huge_force)
> {
> unsigned long mask = READ_ONCE(huge_shmem_orders_always);
> unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
> - unsigned long vm_flags = vma->vm_flags;
> + unsigned long vm_flags = vma ? vma->vm_flags : 0;
> + struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
nit: rather than deriving the fault_mm here, I wonder if it's cleaner to just
pass the vma to shmem_huge_global_enabled()? shmem_huge_global_enabled() is
just using it as a guard to access vm_flags, which you can just as easily do
by testing the vma for non-NULL. And you can access the mm flags with
vma->vm_mm->flags after testing the vma too.
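Something like this, perhaps (an untested sketch, just to make the shape
concrete):

static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
				      bool shmem_huge_force,
				      struct vm_area_struct *vma)
{
	unsigned long vm_flags = vma ? vma->vm_flags : 0;

	/* vma may be NULL for callers outside the fault path */
	if (vma && ((vm_flags & VM_NOHUGEPAGE) ||
		    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
		return false;

	/* ... the rest of the existing checks would go here ... */
	return false;
}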
> /*
> * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> * are enabled for this vma.
> */
> unsigned long orders = BIT(PMD_ORDER + 1) - 1;
> + bool global_huge;
> loff_t i_size;
> int order;
>
> - if ((vm_flags & VM_NOHUGEPAGE) ||
> - test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
> + if (vma && ((vm_flags & VM_NOHUGEPAGE) ||
> + test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
> return 0;
>
> /* If the hardware/firmware marked hugepage support disabled. */
> if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED))
> return 0;
>
> + global_huge = shmem_huge_global_enabled(inode, index, shmem_huge_force,
> + fault_mm, vm_flags);
> + if (!vma || !vma_is_anon_shmem(vma)) {
> + /*
> + * For tmpfs, we now only support PMD sized THP if huge page
> + * is enabled, otherwise fallback to order 0.
> + */
> + return global_huge ? BIT(HPAGE_PMD_ORDER) : 0;
> + }
> +
> /*
> * Following the 'deny' semantics of the top level, force the huge
> * option off from all mounts.
> @@ -2081,7 +2100,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
> struct mm_struct *fault_mm;
> struct folio *folio;
> int error;
> - bool alloced, huge;
> + bool alloced;
> unsigned long orders = 0;
>
> if (WARN_ON_ONCE(!shmem_mapping(inode->i_mapping)))
> @@ -2154,14 +2173,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
> return 0;
> }
>
> - huge = shmem_huge_global_enabled(inode, index, false, fault_mm,
> - vma ? vma->vm_flags : 0);
> - /* Find hugepage orders that are allowed for anonymous shmem. */
> - if (vma && vma_is_anon_shmem(vma))
> - orders = shmem_allowable_huge_orders(inode, vma, index, huge);
> - else if (huge)
> - orders = BIT(HPAGE_PMD_ORDER);
> -
> + /* Find hugepage orders that are allowed for anonymous shmem and tmpfs. */
> + orders = shmem_allowable_huge_orders(inode, vma, index, false);
> if (orders > 0) {
> gfp_t huge_gfp;
>
* Re: [PATCH v2 3/3] mm: shmem: move shmem_huge_global_enabled() into shmem_allowable_huge_orders()
2024-07-15 13:36 ` Ryan Roberts
@ 2024-07-22 2:41 ` Baolin Wang
2024-07-25 13:09 ` David Hildenbrand
0 siblings, 1 reply; 14+ messages in thread
From: Baolin Wang @ 2024-07-22 2:41 UTC
To: Ryan Roberts, akpm, hughd
Cc: willy, david, 21cnbao, ziy, ioworker0, linux-mm, linux-kernel
(Sorry for the late reply due to my vacation.)
On 2024/7/15 21:36, Ryan Roberts wrote:
> On 13/07/2024 14:24, Baolin Wang wrote:
>> Move shmem_huge_global_enabled() into shmem_allowable_huge_orders(), so that
>> shmem_allowable_huge_orders() can also find the allowable huge orders for
>> tmpfs. Moreover, shmem_huge_global_enabled() can now become static.
>>
>> No functional changes.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>
> one nit below, but either way:
>
> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
>
>> ---
>> include/linux/shmem_fs.h | 12 ++----------
>> mm/huge_memory.c | 12 +++---------
>> mm/shmem.c | 41 ++++++++++++++++++++++++++--------------
>> 3 files changed, 32 insertions(+), 33 deletions(-)
>>
>> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
>> index 405ee8d3589a..1564d7d3ca61 100644
>> --- a/include/linux/shmem_fs.h
>> +++ b/include/linux/shmem_fs.h
>> @@ -111,21 +111,13 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
>> int shmem_unuse(unsigned int type);
>>
>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> -extern bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index, bool shmem_huge_force,
>> - struct mm_struct *mm, unsigned long vm_flags);
>> unsigned long shmem_allowable_huge_orders(struct inode *inode,
>> struct vm_area_struct *vma, pgoff_t index,
>> - bool global_huge);
>> + bool shmem_huge_force);
>> #else
>> -static __always_inline bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>> - bool shmem_huge_force, struct mm_struct *mm,
>> - unsigned long vm_flags)
>> -{
>> - return false;
>> -}
>> static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
>> struct vm_area_struct *vma, pgoff_t index,
>> - bool global_huge)
>> + bool shmem_huge_force)
>> {
>> return 0;
>> }
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index cc9bad12be75..f69980b5b5fc 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -151,16 +151,10 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
>> * Must be done before hugepage flags check since shmem has its
>> * own flags.
>> */
>> - if (!in_pf && shmem_file(vma->vm_file)) {
>> - bool global_huge = shmem_huge_global_enabled(file_inode(vma->vm_file),
>> - vma->vm_pgoff, !enforce_sysfs,
>> - vma->vm_mm, vm_flags);
>> -
>> - if (!vma_is_anon_shmem(vma))
>> - return global_huge ? orders : 0;
>> + if (!in_pf && shmem_file(vma->vm_file))
>> return shmem_allowable_huge_orders(file_inode(vma->vm_file),
>> - vma, vma->vm_pgoff, global_huge);
>> - }
>> + vma, vma->vm_pgoff,
>> + !enforce_sysfs);
>>
>> if (!vma_is_anonymous(vma)) {
>> /*
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 1445dcd39b6f..4d274f5a17d9 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -581,7 +581,7 @@ static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>> }
>> }
>>
>> -bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>> +static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>> bool shmem_huge_force, struct mm_struct *mm,
>> unsigned long vm_flags)
>> {
>> @@ -772,6 +772,13 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
>> {
>> return 0;
>> }
>> +
>> +static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>> + bool shmem_huge_force, struct mm_struct *mm,
>> + unsigned long vm_flags)
>> +{
>> + return false;
>> +}
>> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>>
>> /*
>> @@ -1625,27 +1632,39 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> unsigned long shmem_allowable_huge_orders(struct inode *inode,
>> struct vm_area_struct *vma, pgoff_t index,
>> - bool global_huge)
>> + bool shmem_huge_force)
>> {
>> unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>> unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>> - unsigned long vm_flags = vma->vm_flags;
>> + unsigned long vm_flags = vma ? vma->vm_flags : 0;
>> + struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
>
> nit: rather than deriving the fault_mm here, I wonder if it's cleaner to just
> pass the vma to shmem_huge_global_enabled()? shmem_huge_global_enabled() is
> just using it as a guard to access vm_flags, which you can just as easily do
> by testing the vma for non-NULL. And you can access the mm flags with
> vma->vm_mm->flags after testing the vma too.
Makes sense to me, and will do in the next version.
Thanks for reviewing.
* Re: [PATCH v2 0/3] Some cleanups for shmem
2024-07-13 13:24 [PATCH v2 0/3] Some cleanups for shmem Baolin Wang
` (2 preceding siblings ...)
2024-07-13 13:24 ` [PATCH v2 3/3] mm: shmem: move shmem_huge_global_enabled() into shmem_allowable_huge_orders() Baolin Wang
@ 2024-07-24 19:14 ` Andrew Morton
2024-07-24 19:15 ` Andrew Morton
3 siblings, 1 reply; 14+ messages in thread
From: Andrew Morton @ 2024-07-24 19:14 UTC
To: Baolin Wang
Cc: hughd, willy, david, 21cnbao, ryan.roberts, ziy, ioworker0,
linux-mm, linux-kernel
On Sat, 13 Jul 2024 21:24:19 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
> Changes from v1:
> - Add a dummy function in case CONFIG_TRANSPARENT_HUGEPAGE is not
> enabled, which fixes a build error reported by the kernel test robot.
The only difference I'm seeing from the v1 series is the below update
to [3/3]:
--- a/mm/shmem.c~mm-shmem-move-shmem_huge_global_enabled-into-shmem_allowable_huge_orders-v2
+++ a/mm/shmem.c
@@ -549,10 +549,9 @@ static bool shmem_confirm_swap(struct ad
static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
static bool __shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
- bool shmem_huge_force, struct vm_area_struct *vma,
+ bool shmem_huge_force, struct mm_struct *mm,
unsigned long vm_flags)
{
- struct mm_struct *mm = vma ? vma->vm_mm : NULL;
loff_t i_size;
if (!S_ISREG(inode->i_mode))
@@ -583,14 +582,14 @@ static bool __shmem_huge_global_enabled(
}
static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
- bool shmem_huge_force, struct vm_area_struct *vma,
+ bool shmem_huge_force, struct mm_struct *mm,
unsigned long vm_flags)
{
if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
return false;
return __shmem_huge_global_enabled(inode, index, shmem_huge_force,
- vma, vm_flags);
+ mm, vm_flags);
}
#if defined(CONFIG_SYSFS)
@@ -775,7 +774,7 @@ static unsigned long shmem_unused_huge_s
}
static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
- bool shmem_huge_force, struct vm_area_struct *vma,
+ bool shmem_huge_force, struct mm_struct *mm,
unsigned long vm_flags)
{
return false;
@@ -1638,6 +1637,7 @@ unsigned long shmem_allowable_huge_order
unsigned long mask = READ_ONCE(huge_shmem_orders_always);
unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
unsigned long vm_flags = vma ? vma->vm_flags : 0;
+ struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
/*
* Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
* are enabled for this vma.
@@ -1656,7 +1656,7 @@ unsigned long shmem_allowable_huge_order
return 0;
global_huge = shmem_huge_global_enabled(inode, index, shmem_huge_force,
- vma, vm_flags);
+ fault_mm, vm_flags);
if (!vma || !vma_is_anon_shmem(vma)) {
/*
* For tmpfs, we now only support PMD sized THP if huge page
_
* Re: [PATCH v2 0/3] Some cleanups for shmem
2024-07-24 19:14 ` [PATCH v2 0/3] Some cleanups for shmem Andrew Morton
@ 2024-07-24 19:15 ` Andrew Morton
0 siblings, 0 replies; 14+ messages in thread
From: Andrew Morton @ 2024-07-24 19:15 UTC
To: Baolin Wang, hughd, willy, david, 21cnbao, ryan.roberts, ziy,
ioworker0, linux-mm, linux-kernel
On Wed, 24 Jul 2024 12:14:07 -0700 Andrew Morton <akpm@linux-foundation.org> wrote:
> The only difference I'm seeing from the v1 series is the below update
> to [3/3]:
oops, sorry, never mind.
* Re: [PATCH v2 1/3] mm: shmem: simplify the suitable huge orders validation for tmpfs
2024-07-13 13:24 ` [PATCH v2 1/3] mm: shmem: simplify the suitable huge orders validation for tmpfs Baolin Wang
2024-07-15 13:30 ` Ryan Roberts
@ 2024-07-25 13:07 ` David Hildenbrand
1 sibling, 0 replies; 14+ messages in thread
From: David Hildenbrand @ 2024-07-25 13:07 UTC
To: Baolin Wang, akpm, hughd
Cc: willy, 21cnbao, ryan.roberts, ziy, ioworker0, linux-mm,
linux-kernel
On 13.07.24 15:24, Baolin Wang wrote:
> Move the suitable huge orders validation into shmem_suitable_orders() for
> tmpfs, which reuses some code and simplifies the logic.
>
> In addition, the caller has no special handling for the -E2BIG error code
> when a conflict with PMD-sized THP is found in the page cache for tmpfs; it
> simply falls back to order-0 allocation, which is exactly what this patch
> does, so this simplification introduces no functional change.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v2 2/3] mm: shmem: rename shmem_is_huge() to shmem_huge_global_enabled()
2024-07-13 13:24 ` [PATCH v2 2/3] mm: shmem: rename shmem_is_huge() to shmem_huge_global_enabled() Baolin Wang
2024-07-15 13:32 ` Ryan Roberts
@ 2024-07-25 13:08 ` David Hildenbrand
1 sibling, 0 replies; 14+ messages in thread
From: David Hildenbrand @ 2024-07-25 13:08 UTC
To: Baolin Wang, akpm, hughd
Cc: willy, 21cnbao, ryan.roberts, ziy, ioworker0, linux-mm,
linux-kernel
On 13.07.24 15:24, Baolin Wang wrote:
> shmem_is_huge() is now used to check whether the top-level huge page option
> is enabled, so rename it to reflect that usage.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v2 3/3] mm: shmem: move shmem_huge_global_enabled() into shmem_allowable_huge_orders()
2024-07-22 2:41 ` Baolin Wang
@ 2024-07-25 13:09 ` David Hildenbrand
2024-07-26 1:09 ` Baolin Wang
0 siblings, 1 reply; 14+ messages in thread
From: David Hildenbrand @ 2024-07-25 13:09 UTC
To: Baolin Wang, Ryan Roberts, akpm, hughd
Cc: willy, 21cnbao, ziy, ioworker0, linux-mm, linux-kernel
>>> /*
>>> @@ -1625,27 +1632,39 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
>>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>> struct vm_area_struct *vma, pgoff_t index,
>>> - bool global_huge)
>>> + bool shmem_huge_force)
>>> {
>>> unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>>> unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>>> - unsigned long vm_flags = vma->vm_flags;
>>> + unsigned long vm_flags = vma ? vma->vm_flags : 0;
>>> + struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
>>
>> nit: rather than deriving the fault_mm here, I wonder if it's cleaner to just
>> pass the vma to shmem_huge_global_enabled()? shmem_huge_global_enabled() is
>> just using it as a guard to access vm_flags, which you can just as easily do
>> by testing the vma for non-NULL. And you can access the mm flags with
>> vma->vm_mm->flags after testing the vma too.
>
> Makes sense to me, and will do in the next version.
Feel free to add my
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v2 3/3] mm: shmem: move shmem_huge_global_enabled() into shmem_allowable_huge_orders()
2024-07-25 13:09 ` David Hildenbrand
@ 2024-07-26 1:09 ` Baolin Wang
0 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2024-07-26 1:09 UTC
To: David Hildenbrand, Ryan Roberts, akpm, hughd
Cc: willy, 21cnbao, ziy, ioworker0, linux-mm, linux-kernel
On 2024/7/25 21:09, David Hildenbrand wrote:
>>>> /*
>>>> @@ -1625,27 +1632,39 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp,
>>>> gfp_t limit_gfp)
>>>> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>> unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>>> struct vm_area_struct *vma, pgoff_t index,
>>>> - bool global_huge)
>>>> + bool shmem_huge_force)
>>>> {
>>>> unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>>>> unsigned long within_size_orders =
>>>> READ_ONCE(huge_shmem_orders_within_size);
>>>> - unsigned long vm_flags = vma->vm_flags;
>>>> + unsigned long vm_flags = vma ? vma->vm_flags : 0;
>>>> + struct mm_struct *fault_mm = vma ? vma->vm_mm : NULL;
>>>
>>> nit: rather than deriving the fault_mm here, I wonder if its cleaner
>>> to just
>>> pass vma to shmem_huge_global_enabled()? shmem_huge_global_enabled()
>>> is just
>>> using it as a guard to access vm_flags, which you can just as easily
>>> do by
>>> testing the vma for non-NULL. And you can access mm flags with
>>> vma->vm_mm->flags
>>> after testing the vma too.
>>
>> Makes sense to me, and will do in the next version.
>
> Feel free to add my
>
> Acked-by: David Hildenbrand <david@redhat.com>
Thanks David.
Andrew has already queued my v3 patchset into the mm-unstable branch.
Andrew, please help add David's Acked-by tag to the v3 series. Thanks.