linux-fsdevel.vger.kernel.org archive mirror
* [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging
@ 2025-05-21 18:20 Lorenzo Stoakes
  2025-05-21 18:20 ` [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer Lorenzo Stoakes
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: Lorenzo Stoakes @ 2025-05-21 18:20 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Christian Brauner, Jan Kara, Liam R . Howlett,
	Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

When KSM-by-default is established using prctl(PR_SET_MEMORY_MERGE), this
defaults all newly mapped VMAs to having VM_MERGEABLE set, and thus makes
them available to KSM for samepage merging. It also sets VM_MERGEABLE in
all existing VMAs.

However this causes an issue upon mapping of new VMAs - the initial flags
will never have VM_MERGEABLE set when attempting a merge with adjacent VMAs
(this is set later in the mmap() logic), and adjacent VMAs will ALWAYS have
VM_MERGEABLE set.

This renders literally all VMAs in the virtual address space unmergeable.

To avoid this, this series performs the check for PR_SET_MEMORY_MERGE far
earlier in the mmap() logic, prior to the merge being attempted.

However we run into a complexity with the deprecated .mmap() callback - if
a driver hooks this, it might change flags, thus adjusting KSM merge
eligibility.

This isn't a problem for brk(), where the VMA must be anonymous. However
for mmap() we are conservative - if the VMA is anonymous then we can always
proceed, however if not, we permit only shmem mappings and drivers which
implement .mmap_prepare().

If we can't rule out the driver changing things, then we maintain the
same behaviour of performing the KSM check later in the mmap() logic (and
thus losing VMA mergeability).

Since the .mmap_prepare() hook is invoked prior to the KSM check, this
means we can always perform the KSM check early if it is present. Over time
as drivers are converted, we can do away with the later check altogether.

A great many use-cases for this logic will use anonymous or shmem memory at
any rate, as KSM is not supported for the page cache, and the driver
outliers in question are MAP_PRIVATE mappings of these files.

So this change should already cover the majority of actual KSM use-cases.

v2:
* Removed unnecessary ret local variable in ksm_vma_flags() as per David.
* Added Stefan Roesch to Cc and added Fixes tag, as per Andrew, David.
* Propagated tags (thanks everyone!)
* Removed unnecessary !CONFIG_KSM ksm_add_vma() stub from
  include/linux/ksm.h.
* Added missing !CONFIG_KSM ksm_vma_flags() stub in
  include/linux/ksm.h.
* After discussion with David, I've decided to defer removing the
  VM_SPECIAL case for KSM, we can address this in a follow-up series.
* Expanded 3/4 commit message to reference KSM eligibility vs. merging and
  referenced future plans to permit KSM for VM_SPECIAL VMAs.

v1:
https://lore.kernel.org/all/cover.1747431920.git.lorenzo.stoakes@oracle.com/

Lorenzo Stoakes (4):
  mm: ksm: have KSM VMA checks not require a VMA pointer
  mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible()
  mm: prevent KSM from completely breaking VMA merging
  tools/testing/selftests: add VMA merge tests for KSM merge

 include/linux/fs.h                 |  7 ++-
 include/linux/ksm.h                |  8 +--
 mm/ksm.c                           | 49 ++++++++++++-------
 mm/vma.c                           | 49 ++++++++++++++++++-
 tools/testing/selftests/mm/merge.c | 78 ++++++++++++++++++++++++++++++
 5 files changed, 167 insertions(+), 24 deletions(-)

--
2.49.0

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer
  2025-05-21 18:20 [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging Lorenzo Stoakes
@ 2025-05-21 18:20 ` Lorenzo Stoakes
  2025-05-26 14:31   ` Liam R. Howlett
                     ` (2 more replies)
  2025-05-21 18:20 ` [PATCH v2 2/4] mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible() Lorenzo Stoakes
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 20+ messages in thread
From: Lorenzo Stoakes @ 2025-05-21 18:20 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Christian Brauner, Jan Kara, Liam R . Howlett,
	Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

In subsequent commits we are going to determine KSM eligibility prior to a
VMA being constructed, at which point we will of course not yet have access
to a VMA pointer.

It is trivial to boil down the check logic to be parameterised on
mm_struct, file and VMA flags, so do so.

As a part of this change, additionally expose and use file_is_dax() to
determine whether a file is being mapped under a DAX inode.

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 include/linux/fs.h |  7 ++++++-
 mm/ksm.c           | 32 ++++++++++++++++++++------------
 2 files changed, 26 insertions(+), 13 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 09c8495dacdb..e1397e2b55ea 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -3691,9 +3691,14 @@ void setattr_copy(struct mnt_idmap *, struct inode *inode,
 
 extern int file_update_time(struct file *file);
 
+static inline bool file_is_dax(const struct file *file)
+{
+	return file && IS_DAX(file->f_mapping->host);
+}
+
 static inline bool vma_is_dax(const struct vm_area_struct *vma)
 {
-	return vma->vm_file && IS_DAX(vma->vm_file->f_mapping->host);
+	return file_is_dax(vma->vm_file);
 }
 
 static inline bool vma_is_fsdax(struct vm_area_struct *vma)
diff --git a/mm/ksm.c b/mm/ksm.c
index 8583fb91ef13..08d486f188ff 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -677,28 +677,33 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_v
 	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
 }
 
-static bool vma_ksm_compatible(struct vm_area_struct *vma)
+static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags)
 {
-	if (vma->vm_flags & (VM_SHARED  | VM_MAYSHARE   | VM_PFNMAP  |
-			     VM_IO      | VM_DONTEXPAND | VM_HUGETLB |
-			     VM_MIXEDMAP| VM_DROPPABLE))
+	if (vm_flags & (VM_SHARED   | VM_MAYSHARE   | VM_PFNMAP  |
+			VM_IO       | VM_DONTEXPAND | VM_HUGETLB |
+			VM_MIXEDMAP | VM_DROPPABLE))
 		return false;		/* just ignore the advice */
 
-	if (vma_is_dax(vma))
+	if (file_is_dax(file))
 		return false;
 
 #ifdef VM_SAO
-	if (vma->vm_flags & VM_SAO)
+	if (vm_flags & VM_SAO)
 		return false;
 #endif
 #ifdef VM_SPARC_ADI
-	if (vma->vm_flags & VM_SPARC_ADI)
+	if (vm_flags & VM_SPARC_ADI)
 		return false;
 #endif
 
 	return true;
 }
 
+static bool vma_ksm_compatible(struct vm_area_struct *vma)
+{
+	return ksm_compatible(vma->vm_file, vma->vm_flags);
+}
+
 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
 		unsigned long addr)
 {
@@ -2696,14 +2701,17 @@ static int ksm_scan_thread(void *nothing)
 	return 0;
 }
 
-static void __ksm_add_vma(struct vm_area_struct *vma)
+static bool __ksm_should_add_vma(const struct file *file, vm_flags_t vm_flags)
 {
-	unsigned long vm_flags = vma->vm_flags;
-
 	if (vm_flags & VM_MERGEABLE)
-		return;
+		return false;
+
+	return ksm_compatible(file, vm_flags);
+}
 
-	if (vma_ksm_compatible(vma))
+static void __ksm_add_vma(struct vm_area_struct *vma)
+{
+	if (__ksm_should_add_vma(vma->vm_file, vma->vm_flags))
 		vm_flags_set(vma, VM_MERGEABLE);
 }
 
-- 
2.49.0



* [PATCH v2 2/4] mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible()
  2025-05-21 18:20 [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging Lorenzo Stoakes
  2025-05-21 18:20 ` [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer Lorenzo Stoakes
@ 2025-05-21 18:20 ` Lorenzo Stoakes
  2025-05-26 14:31   ` Liam R. Howlett
                     ` (2 more replies)
  2025-05-21 18:20 ` [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging Lorenzo Stoakes
                   ` (2 subsequent siblings)
  4 siblings, 3 replies; 20+ messages in thread
From: Lorenzo Stoakes @ 2025-05-21 18:20 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Christian Brauner, Jan Kara, Liam R . Howlett,
	Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

There's no need to spell out all the special cases; doing it this way also
makes it absolutely clear that we preclude unmergeable VMAs in general, and
puts the other excluded flags in stark and clear contrast.

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 mm/ksm.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 08d486f188ff..d0c763abd499 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -679,9 +679,8 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_v
 
 static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags)
 {
-	if (vm_flags & (VM_SHARED   | VM_MAYSHARE   | VM_PFNMAP  |
-			VM_IO       | VM_DONTEXPAND | VM_HUGETLB |
-			VM_MIXEDMAP | VM_DROPPABLE))
+	if (vm_flags & (VM_SHARED  | VM_MAYSHARE | VM_SPECIAL |
+			VM_HUGETLB | VM_DROPPABLE))
 		return false;		/* just ignore the advice */
 
 	if (file_is_dax(file))
-- 
2.49.0



* [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging
  2025-05-21 18:20 [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging Lorenzo Stoakes
  2025-05-21 18:20 ` [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer Lorenzo Stoakes
  2025-05-21 18:20 ` [PATCH v2 2/4] mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible() Lorenzo Stoakes
@ 2025-05-21 18:20 ` Lorenzo Stoakes
  2025-05-26 13:33   ` David Hildenbrand
                     ` (3 more replies)
  2025-05-21 18:20 ` [PATCH v2 4/4] tools/testing/selftests: add VMA merge tests for KSM merge Lorenzo Stoakes
  2025-05-21 18:23 ` [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging Lorenzo Stoakes
  4 siblings, 4 replies; 20+ messages in thread
From: Lorenzo Stoakes @ 2025-05-21 18:20 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Christian Brauner, Jan Kara, Liam R . Howlett,
	Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

If a user wishes to enable KSM mergeability for an entire process and all
fork/exec'd processes that come after it, they use the prctl()
PR_SET_MEMORY_MERGE operation.

This defaults all newly mapped VMAs to have the VM_MERGEABLE VMA flag set
(in order to indicate they are KSM mergeable), as well as setting this flag
for all existing VMAs.

However it also entirely and completely breaks VMA merging for the process
and all forked (and fork/exec'd) processes.

This is because when a new mapping is proposed, the flags specified will
never have VM_MERGEABLE set. However all adjacent VMAs will already have
VM_MERGEABLE set, rendering VMAs unmergeable by default.

To work around this, we try to set the VM_MERGEABLE flag prior to
attempting a merge. In the case of brk() this can always be done.

However on mmap() things are more complicated - while KSM is not supported
for shared file-backed mappings, it is supported for MAP_PRIVATE file-backed
mappings.

And these mappings may have deprecated .mmap() callbacks specified which
could, in theory, adjust flags and thus KSM eligibility.

This is unlikely to cause an issue on merge, as any adjacent file-backed
mappings would already have the same post-.mmap() callback attributes, and
thus would naturally not be merged.

But for the purposes of establishing a VMA as KSM-eligible (as well as
initially scanning the VMA), this is potentially very problematic.

So we check to determine whether this is at all possible. If not, we set
VM_MERGEABLE prior to the merge attempt on mmap(); otherwise we retain the
previous behaviour.

When .mmap_prepare() is more widely used, we can remove this precaution.

While this doesn't quite cover all cases, it covers a great many (all
anonymous memory, for instance), meaning we should already see a
significant improvement in VMA mergeability.

When it comes to file-backed mappings (other than shmem), we are really
only interested in MAP_PRIVATE mappings, which have anonymous pages
available by default. The VM_SPECIAL restriction therefore makes less sense
for KSM.

In a future series we therefore intend to remove this limitation, which
ought to simplify this implementation. However it makes sense to defer
doing so until a later stage so we can first address this mergeability
issue.

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Fixes: d7597f59d1d3 ("mm: add new api to enable ksm per process") # please no backport!
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 include/linux/ksm.h |  8 +++++---
 mm/ksm.c            | 18 +++++++++++------
 mm/vma.c            | 49 +++++++++++++++++++++++++++++++++++++++++++--
 3 files changed, 64 insertions(+), 11 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index d73095b5cd96..51787f0b0208 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -17,8 +17,8 @@
 #ifdef CONFIG_KSM
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, int advice, unsigned long *vm_flags);
-
-void ksm_add_vma(struct vm_area_struct *vma);
+vm_flags_t ksm_vma_flags(const struct mm_struct *mm, const struct file *file,
+			 vm_flags_t vm_flags);
 int ksm_enable_merge_any(struct mm_struct *mm);
 int ksm_disable_merge_any(struct mm_struct *mm);
 int ksm_disable(struct mm_struct *mm);
@@ -97,8 +97,10 @@ bool ksm_process_mergeable(struct mm_struct *mm);

 #else  /* !CONFIG_KSM */

-static inline void ksm_add_vma(struct vm_area_struct *vma)
+static inline vm_flags_t ksm_vma_flags(const struct mm_struct *mm,
+		const struct file *file, vm_flags_t vm_flags)
 {
+	return vm_flags;
 }

 static inline int ksm_disable(struct mm_struct *mm)
diff --git a/mm/ksm.c b/mm/ksm.c
index d0c763abd499..18b3690bb69a 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2731,16 +2731,22 @@ static int __ksm_del_vma(struct vm_area_struct *vma)
 	return 0;
 }
 /**
- * ksm_add_vma - Mark vma as mergeable if compatible
+ * ksm_vma_flags - Update VMA flags to mark as mergeable if compatible
  *
- * @vma:  Pointer to vma
+ * @mm:       Proposed VMA's mm_struct
+ * @file:     Proposed VMA's file-backed mapping, if any.
+ * @vm_flags: Proposed VMA's flags.
+ *
+ * Returns: @vm_flags possibly updated to mark mergeable.
  */
-void ksm_add_vma(struct vm_area_struct *vma)
+vm_flags_t ksm_vma_flags(const struct mm_struct *mm, const struct file *file,
+			 vm_flags_t vm_flags)
 {
-	struct mm_struct *mm = vma->vm_mm;
+	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags) &&
+	    __ksm_should_add_vma(file, vm_flags))
+		vm_flags |= VM_MERGEABLE;

-	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
-		__ksm_add_vma(vma);
+	return vm_flags;
 }

 static void ksm_add_vmas(struct mm_struct *mm)
diff --git a/mm/vma.c b/mm/vma.c
index 3ff6cfbe3338..5bebe55ea737 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2482,7 +2482,6 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 	 */
 	if (!vma_is_anonymous(vma))
 		khugepaged_enter_vma(vma, map->flags);
-	ksm_add_vma(vma);
 	*vmap = vma;
 	return 0;

@@ -2585,6 +2584,45 @@ static void set_vma_user_defined_fields(struct vm_area_struct *vma,
 	vma->vm_private_data = map->vm_private_data;
 }

+static void update_ksm_flags(struct mmap_state *map)
+{
+	map->flags = ksm_vma_flags(map->mm, map->file, map->flags);
+}
+
+/*
+ * Are we guaranteed no driver can change state such as to preclude KSM merging?
+ * If so, let's set the KSM mergeable flag early so we don't break VMA merging.
+ *
+ * This is applicable when PR_SET_MEMORY_MERGE has been set on the mm_struct via
+ * prctl() causing newly mapped VMAs to have the KSM mergeable VMA flag set.
+ *
+ * If this is not the case, then we set the flag after considering mergeability,
+ * which will prevent mergeability as, when PR_SET_MEMORY_MERGE is set, a new
+ * VMA will not have the KSM mergeability VMA flag set, but all other VMAs will,
+ * preventing any merge.
+ */
+static bool can_set_ksm_flags_early(struct mmap_state *map)
+{
+	struct file *file = map->file;
+
+	/* Anonymous mappings have no driver which can change them. */
+	if (!file)
+		return true;
+
+	/* shmem is safe. */
+	if (shmem_file(file))
+		return true;
+
+	/*
+	 * If .mmap_prepare() is specified, then the driver will have already
+	 * manipulated state prior to updating KSM flags.
+	 */
+	if (file->f_op->mmap_prepare)
+		return true;
+
+	return false;
+}
+
 static unsigned long __mmap_region(struct file *file, unsigned long addr,
 		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
 		struct list_head *uf)
@@ -2595,6 +2633,7 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
 	bool have_mmap_prepare = file && file->f_op->mmap_prepare;
 	VMA_ITERATOR(vmi, mm, addr);
 	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
+	bool check_ksm_early = can_set_ksm_flags_early(&map);

 	error = __mmap_prepare(&map, uf);
 	if (!error && have_mmap_prepare)
@@ -2602,6 +2641,9 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
 	if (error)
 		goto abort_munmap;

+	if (check_ksm_early)
+		update_ksm_flags(&map);
+
 	/* Attempt to merge with adjacent VMAs... */
 	if (map.prev || map.next) {
 		VMG_MMAP_STATE(vmg, &map, /* vma = */ NULL);
@@ -2611,6 +2653,9 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,

 	/* ...but if we can't, allocate a new VMA. */
 	if (!vma) {
+		if (!check_ksm_early)
+			update_ksm_flags(&map);
+
 		error = __mmap_new_vma(&map, &vma);
 		if (error)
 			goto unacct_error;
@@ -2713,6 +2758,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	 * Note: This happens *after* clearing old mappings in some code paths.
 	 */
 	flags |= VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
+	flags = ksm_vma_flags(mm, NULL, flags);
 	if (!may_expand_vm(mm, flags, len >> PAGE_SHIFT))
 		return -ENOMEM;

@@ -2756,7 +2802,6 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,

 	mm->map_count++;
 	validate_mm(mm);
-	ksm_add_vma(vma);
 out:
 	perf_event_mmap(vma);
 	mm->total_vm += len >> PAGE_SHIFT;
--
2.49.0


* [PATCH v2 4/4] tools/testing/selftests: add VMA merge tests for KSM merge
  2025-05-21 18:20 [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging Lorenzo Stoakes
                   ` (2 preceding siblings ...)
  2025-05-21 18:20 ` [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging Lorenzo Stoakes
@ 2025-05-21 18:20 ` Lorenzo Stoakes
  2025-05-26 14:34   ` Liam R. Howlett
  2025-05-21 18:23 ` [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging Lorenzo Stoakes
  4 siblings, 1 reply; 20+ messages in thread
From: Lorenzo Stoakes @ 2025-05-21 18:20 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Christian Brauner, Jan Kara, Liam R . Howlett,
	Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

Add a test to assert that we now allow merging of VMAs when KSM
merging-by-default has been set by prctl(PR_SET_MEMORY_MERGE, ...).

We simply perform a trivial mapping of adjacent VMAs, expecting a merge;
however, prior to the recent changes establishing this mode earlier in the
mmap() logic, these merges would not have succeeded.

Assert that we have fixed this!

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
Tested-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 tools/testing/selftests/mm/merge.c | 78 ++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/tools/testing/selftests/mm/merge.c b/tools/testing/selftests/mm/merge.c
index c76646cdf6e6..2380a5a6a529 100644
--- a/tools/testing/selftests/mm/merge.c
+++ b/tools/testing/selftests/mm/merge.c
@@ -2,10 +2,12 @@
 
 #define _GNU_SOURCE
 #include "../kselftest_harness.h"
+#include <linux/prctl.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <unistd.h>
 #include <sys/mman.h>
+#include <sys/prctl.h>
 #include <sys/wait.h>
 #include "vm_util.h"
 
@@ -31,6 +33,11 @@ FIXTURE_TEARDOWN(merge)
 {
 	ASSERT_EQ(munmap(self->carveout, 12 * self->page_size), 0);
 	ASSERT_EQ(close_procmap(&self->procmap), 0);
+	/*
+	 * Clear unconditionally, as some tests set this. It is no issue if this
+	 * fails (KSM may be disabled for instance).
+	 */
+	prctl(PR_SET_MEMORY_MERGE, 0, 0, 0, 0);
 }
 
 TEST_F(merge, mprotect_unfaulted_left)
@@ -452,4 +459,75 @@ TEST_F(merge, forked_source_vma)
 	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr2 + 5 * page_size);
 }
 
+TEST_F(merge, ksm_merge)
+{
+	unsigned int page_size = self->page_size;
+	char *carveout = self->carveout;
+	struct procmap_fd *procmap = &self->procmap;
+	char *ptr, *ptr2;
+	int err;
+
+	/*
+	 * Map two R/W mappings immediately adjacent to one another; they
+	 * should trivially merge:
+	 *
+	 * |-----------|-----------|
+	 * |    R/W    |    R/W    |
+	 * |-----------|-----------|
+	 *      ptr         ptr2
+	 */
+
+	ptr = mmap(&carveout[page_size], page_size, PROT_READ | PROT_WRITE,
+		   MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr, MAP_FAILED);
+	ptr2 = mmap(&carveout[2 * page_size], page_size,
+		    PROT_READ | PROT_WRITE,
+		    MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr2, MAP_FAILED);
+	ASSERT_TRUE(find_vma_procmap(procmap, ptr));
+	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr);
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 2 * page_size);
+
+	/* Unmap the second half of this merged VMA. */
+	ASSERT_EQ(munmap(ptr2, page_size), 0);
+
+	/* OK, now enable global KSM merge. We clear this on test teardown. */
+	err = prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0);
+	if (err == -1) {
+		int errnum = errno;
+
+		/* Only non-failure case... */
+		ASSERT_EQ(errnum, EINVAL);
+		/* ...but indicates we should skip. */
+		SKIP(return, "KSM memory merging not supported, skipping.");
+	}
+
+	/*
+	 * Now map a VMA adjacent to the existing that was just made
+	 * VM_MERGEABLE, this should merge as well.
+	 */
+	ptr2 = mmap(&carveout[2 * page_size], page_size,
+		    PROT_READ | PROT_WRITE,
+		    MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr2, MAP_FAILED);
+	ASSERT_TRUE(find_vma_procmap(procmap, ptr));
+	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr);
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 2 * page_size);
+
+	/* Now unmap this VMA altogether. */
+	ASSERT_EQ(munmap(ptr, 2 * page_size), 0);
+
+	/* Try the same operation as before, asserting this also merges fine. */
+	ptr = mmap(&carveout[page_size], page_size, PROT_READ | PROT_WRITE,
+		   MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr, MAP_FAILED);
+	ptr2 = mmap(&carveout[2 * page_size], page_size,
+		    PROT_READ | PROT_WRITE,
+		    MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	ASSERT_NE(ptr2, MAP_FAILED);
+	ASSERT_TRUE(find_vma_procmap(procmap, ptr));
+	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr);
+	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 2 * page_size);
+}
+
 TEST_HARNESS_MAIN
-- 
2.49.0



* Re: [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging
  2025-05-21 18:20 [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging Lorenzo Stoakes
                   ` (3 preceding siblings ...)
  2025-05-21 18:20 ` [PATCH v2 4/4] tools/testing/selftests: add VMA merge tests for KSM merge Lorenzo Stoakes
@ 2025-05-21 18:23 ` Lorenzo Stoakes
  4 siblings, 0 replies; 20+ messages in thread
From: Lorenzo Stoakes @ 2025-05-21 18:23 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Christian Brauner, Jan Kara, Liam R . Howlett,
	Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

And in the annals of 'hit send then immediately notice mistake' we have
entry number 1,379 :P

This subject should read [PATCH v2 0/4] obviously, like the patches that
reply to it in-thread.

Apologies for that!

Other than that, series should be fine :>)

Cheers, Lorenzo

On Wed, May 21, 2025 at 07:20:27PM +0100, Lorenzo Stoakes wrote:
> When KSM-by-default is established using prctl(PR_SET_MEMORY_MERGE), this
> defaults all newly mapped VMAs to having VM_MERGEABLE set, and thus makes
> them available to KSM for samepage merging. It also sets VM_MERGEABLE in
> all existing VMAs.
>
> However this causes an issue upon mapping of new VMAs - the initial flags
> will never have VM_MERGEABLE set when attempting a merge with adjacent VMAs
> (this is set later in the mmap() logic), and adjacent VMAs will ALWAYS have
> VM_MERGEABLE set.
>
> This renders literally all VMAs in the virtual address space unmergeable.
>
> To avoid this, this series performs the check for PR_SET_MEMORY_MERGE far
> earlier in the mmap() logic, prior to the merge being attempted.
>
> However we run into a complexity with the deprecated .mmap() callback - if
> a driver hooks this, it might change flags, thus adjusting KSM merge
> eligibility.
>
> This isn't a problem for brk(), where the VMA must be anonymous. However
> for mmap() we are conservative - if the VMA is anonymous then we can always
> proceed, however if not, we permit only shmem mappings and drivers which
> implement .mmap_prepare().
>
> If we can't rule out the driver changing things, then we maintain the
> same behaviour of performing the KSM check later in the mmap() logic (and
> thus losing VMA mergeability).
>
> Since the .mmap_prepare() hook is invoked prior to the KSM check, this
> means we can always perform the KSM check early if it is present. Over time
> as drivers are converted, we can do away with the later check altogether.
>
> A great many use-cases for this logic will use anonymous or shmem memory at
> any rate, as KSM is not supported for the page cache, and the driver
> outliers in question are MAP_PRIVATE mappings of these files.
>
> So this change should already cover the majority of actual KSM use-cases.
>
> v2:
> * Removed unnecessary ret local variable in ksm_vma_flags() as per David.
> * Added Stefan Roesch to Cc and added Fixes tag, as per Andrew, David.
> * Propagated tags (thanks everyone!)
> * Removed unnecessary !CONFIG_KSM ksm_add_vma() stub from
>   include/linux/ksm.h.
> * Added missing !CONFIG_KSM ksm_vma_flags() stub in
>   include/linux/ksm.h.
> * After discussion with David, I've decided to defer removing the
>   VM_SPECIAL case for KSM, we can address this in a follow-up series.
> * Expanded 3/4 commit message to reference KSM eligibility vs. merging and
>   referenced future plans to permit KSM for VM_SPECIAL VMAs.
>
> v1:
> https://lore.kernel.org/all/cover.1747431920.git.lorenzo.stoakes@oracle.com/
>
> Lorenzo Stoakes (4):
>   mm: ksm: have KSM VMA checks not require a VMA pointer
>   mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible()
>   mm: prevent KSM from completely breaking VMA merging
>   tools/testing/selftests: add VMA merge tests for KSM merge
>
>  include/linux/fs.h                 |  7 ++-
>  include/linux/ksm.h                |  8 +--
>  mm/ksm.c                           | 49 ++++++++++++-------
>  mm/vma.c                           | 49 ++++++++++++++++++-
>  tools/testing/selftests/mm/merge.c | 78 ++++++++++++++++++++++++++++++
>  5 files changed, 167 insertions(+), 24 deletions(-)
>
> --
> 2.49.0
>


* Re: [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging
  2025-05-21 18:20 ` [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging Lorenzo Stoakes
@ 2025-05-26 13:33   ` David Hildenbrand
  2025-05-26 14:32   ` Liam R. Howlett
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 20+ messages in thread
From: David Hildenbrand @ 2025-05-26 13:33 UTC (permalink / raw)
  To: Lorenzo Stoakes, Andrew Morton
  Cc: Alexander Viro, Christian Brauner, Jan Kara, Liam R . Howlett,
	Vlastimil Babka, Jann Horn, Pedro Falcato, Xu Xin, Chengming Zhou,
	linux-mm, linux-kernel, linux-fsdevel, Stefan Roesch

On 21.05.25 20:20, Lorenzo Stoakes wrote:
> If a user wishes to enable KSM mergeability for an entire process and all
> fork/exec'd processes that come after it, they use the prctl()
> PR_SET_MEMORY_MERGE operation.
> 
> This defaults all newly mapped VMAs to have the VM_MERGEABLE VMA flag set
> (in order to indicate they are KSM mergeable), as well as setting this flag
> for all existing VMAs.
> 
> However it also entirely and completely breaks VMA merging for the process
> and all forked (and fork/exec'd) processes.
> 
> This is because when a new mapping is proposed, the flags specified will
> never have VM_MERGEABLE set. However all adjacent VMAs will already have
> VM_MERGEABLE set, rendering VMAs unmergeable by default.
> 
> To work around this, we try to set the VM_MERGEABLE flag prior to
> attempting a merge. In the case of brk() this can always be done.
> 
> However on mmap() things are more complicated - while KSM is not supported
> for shared file-backed mappings, it is supported for MAP_PRIVATE file-backed
> mappings.
> 
> And these mappings may have deprecated .mmap() callbacks specified which
> could, in theory, adjust flags and thus KSM eligibility.
> 
> This is unlikely to cause an issue on merge, as any adjacent file-backed
> mappings would already have the same post-.mmap() callback attributes, and
> thus would naturally not be merged.
> 
> But for the purposes of establishing a VMA as KSM-eligible (as well as
> initially scanning the VMA), this is potentially very problematic.
> 
> So we check to determine whether this is at all possible. If not, we set
> VM_MERGEABLE prior to the merge attempt on mmap(); otherwise we retain the
> previous behaviour.
> 
> When .mmap_prepare() is more widely used, we can remove this precaution.
> 
> While this doesn't quite cover all cases, it covers a great many (all
> anonymous memory, for instance), meaning we should already see a
> significant improvement in VMA mergeability.
> 
> When it comes to file-backed mappings (other than shmem), we are really
> only interested in MAP_PRIVATE mappings, which have anonymous pages
> available by default. The VM_SPECIAL restriction therefore makes less
> sense for KSM.
> 
> In a future series we therefore intend to remove this limitation, which
> ought to simplify this implementation. However it makes sense to defer
> doing so until a later stage so we can first address this mergeability
> issue.
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Fixes: d7597f59d1d3 ("mm: add new api to enable ksm per process") # please no backport!
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>


Acked-by: David Hildenbrand <david@redhat.com>


-- 
Cheers,

David / dhildenb



* Re: [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer
  2025-05-21 18:20 ` [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer Lorenzo Stoakes
@ 2025-05-26 14:31   ` Liam R. Howlett
  2025-05-28 15:41   ` xu.xin16
  2025-05-29 13:59   ` Vlastimil Babka
  2 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2025-05-26 14:31 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Andrew Morton, Alexander Viro, Christian Brauner, Jan Kara,
	Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

* Lorenzo Stoakes <lorenzo.stoakes@oracle.com> [250521 14:20]:
> In subsequent commits we are going to determine KSM eligibility prior to a
> VMA being constructed, at which point we will of course not yet have access
> to a VMA pointer.
> 
> It is trivial to boil down the check logic to be parameterised on
> mm_struct, file and VMA flags, so do so.
> 
> As a part of this change, additionally expose and use file_is_dax() to
> determine whether a file is being mapped under a DAX inode.
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> ---
>  include/linux/fs.h |  7 ++++++-
>  mm/ksm.c           | 32 ++++++++++++++++++++------------
>  2 files changed, 26 insertions(+), 13 deletions(-)
> 
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 09c8495dacdb..e1397e2b55ea 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -3691,9 +3691,14 @@ void setattr_copy(struct mnt_idmap *, struct inode *inode,
>  
>  extern int file_update_time(struct file *file);
>  
> +static inline bool file_is_dax(const struct file *file)
> +{
> +	return file && IS_DAX(file->f_mapping->host);
> +}
> +
>  static inline bool vma_is_dax(const struct vm_area_struct *vma)
>  {
> -	return vma->vm_file && IS_DAX(vma->vm_file->f_mapping->host);
> +	return file_is_dax(vma->vm_file);
>  }
>  
>  static inline bool vma_is_fsdax(struct vm_area_struct *vma)
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 8583fb91ef13..08d486f188ff 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -677,28 +677,33 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_v
>  	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
>  }
>  
> -static bool vma_ksm_compatible(struct vm_area_struct *vma)
> +static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags)
>  {
> -	if (vma->vm_flags & (VM_SHARED  | VM_MAYSHARE   | VM_PFNMAP  |
> -			     VM_IO      | VM_DONTEXPAND | VM_HUGETLB |
> -			     VM_MIXEDMAP| VM_DROPPABLE))
> +	if (vm_flags & (VM_SHARED   | VM_MAYSHARE   | VM_PFNMAP  |
> +			VM_IO       | VM_DONTEXPAND | VM_HUGETLB |
> +			VM_MIXEDMAP | VM_DROPPABLE))
>  		return false;		/* just ignore the advice */
>  
> -	if (vma_is_dax(vma))
> +	if (file_is_dax(file))
>  		return false;
>  
>  #ifdef VM_SAO
> -	if (vma->vm_flags & VM_SAO)
> +	if (vm_flags & VM_SAO)
>  		return false;
>  #endif
>  #ifdef VM_SPARC_ADI
> -	if (vma->vm_flags & VM_SPARC_ADI)
> +	if (vm_flags & VM_SPARC_ADI)
>  		return false;
>  #endif
>  
>  	return true;
>  }
>  
> +static bool vma_ksm_compatible(struct vm_area_struct *vma)
> +{
> +	return ksm_compatible(vma->vm_file, vma->vm_flags);
> +}
> +
>  static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
>  		unsigned long addr)
>  {
> @@ -2696,14 +2701,17 @@ static int ksm_scan_thread(void *nothing)
>  	return 0;
>  }
>  
> -static void __ksm_add_vma(struct vm_area_struct *vma)
> +static bool __ksm_should_add_vma(const struct file *file, vm_flags_t vm_flags)
>  {
> -	unsigned long vm_flags = vma->vm_flags;
> -
>  	if (vm_flags & VM_MERGEABLE)
> -		return;
> +		return false;
> +
> +	return ksm_compatible(file, vm_flags);
> +}
>  
> -	if (vma_ksm_compatible(vma))
> +static void __ksm_add_vma(struct vm_area_struct *vma)
> +{
> +	if (__ksm_should_add_vma(vma->vm_file, vma->vm_flags))
>  		vm_flags_set(vma, VM_MERGEABLE);
>  }
>  
> -- 
> 2.49.0
> 

* Re: [PATCH v2 2/4] mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible()
  2025-05-21 18:20 ` [PATCH v2 2/4] mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible() Lorenzo Stoakes
@ 2025-05-26 14:31   ` Liam R. Howlett
  2025-05-28 15:43   ` xu.xin16
  2025-05-29 14:01   ` Vlastimil Babka
  2 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2025-05-26 14:31 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Andrew Morton, Alexander Viro, Christian Brauner, Jan Kara,
	Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

* Lorenzo Stoakes <lorenzo.stoakes@oracle.com> [250521 14:20]:
> There's no need to spell out all the special cases, also doing it this way
> makes it absolutely clear that we preclude unmergeable VMAs in general, and
> puts the other excluded flags in stark and clear contrast.
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> ---
>  mm/ksm.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 08d486f188ff..d0c763abd499 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -679,9 +679,8 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_v
>  
>  static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags)
>  {
> -	if (vm_flags & (VM_SHARED   | VM_MAYSHARE   | VM_PFNMAP  |
> -			VM_IO       | VM_DONTEXPAND | VM_HUGETLB |
> -			VM_MIXEDMAP | VM_DROPPABLE))
> +	if (vm_flags & (VM_SHARED  | VM_MAYSHARE | VM_SPECIAL |
> +			VM_HUGETLB | VM_DROPPABLE))
>  		return false;		/* just ignore the advice */
>  
>  	if (file_is_dax(file))
> -- 
> 2.49.0
> 

* Re: [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging
  2025-05-21 18:20 ` [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging Lorenzo Stoakes
  2025-05-26 13:33   ` David Hildenbrand
@ 2025-05-26 14:32   ` Liam R. Howlett
  2025-05-28 15:38   ` xu.xin16
  2025-05-29 14:50   ` Vlastimil Babka
  3 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2025-05-26 14:32 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Andrew Morton, Alexander Viro, Christian Brauner, Jan Kara,
	Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

* Lorenzo Stoakes <lorenzo.stoakes@oracle.com> [250521 14:20]:
> If a user wishes to enable KSM mergeability for an entire process and all
> fork/exec'd processes that come after it, they use the prctl()
> PR_SET_MEMORY_MERGE operation.
> 
> This defaults all newly mapped VMAs to have the VM_MERGEABLE VMA flag set
> (in order to indicate they are KSM mergeable), as well as setting this flag
> for all existing VMAs.
> 
> However it also entirely and completely breaks VMA merging for the process
> and all forked (and fork/exec'd) processes.
> 
> This is because when a new mapping is proposed, the flags specified will
> never have VM_MERGEABLE set. However all adjacent VMAs will already have
> VM_MERGEABLE set, rendering VMAs unmergeable by default.
> 
> To work around this, we try to set the VM_MERGEABLE flag prior to
> attempting a merge. In the case of brk() this can always be done.
> 
> However on mmap() things are more complicated - while KSM is not supported
> for file-backed mappings, it is supported for MAP_PRIVATE file-backed
> mappings.
> 
> And these mappings may have deprecated .mmap() callbacks specified which
> could, in theory, adjust flags and thus KSM eligibility.
> 
> This is unlikely to cause an issue on merge, as any adjacent file-backed
> mappings would already have the same post-.mmap() callback attributes, and
> thus would naturally not be merged.
> 
> But for the purposes of establishing a VMA as KSM-eligible (as well as
> initially scanning the VMA), this is potentially very problematic.
> 
> So we check to determine whether this is at all possible. If not, we set
> VM_MERGEABLE prior to the merge attempt on mmap(), otherwise we retain the
> previous behaviour.
> 
> When .mmap_prepare() is more widely used, we can remove this precaution.
> 
> While this doesn't quite cover all cases, it covers a great many (all
> anonymous memory, for instance), meaning we should already see a
> significant improvement in VMA mergeability.
> 
> When it comes to file-backed mappings (other than shmem), we are really
> only interested in MAP_PRIVATE mappings, which have an anon page available
> by default. The VM_SPECIAL restriction therefore makes less sense for
> KSM.
> 
> In a future series we therefore intend to remove this limitation, which
> ought to simplify this implementation. However it makes sense to defer
> doing so until a later stage so we can first address this mergeability
> issue.
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Fixes: d7597f59d1d3 ("mm: add new api to enable ksm per process") # please no backport!
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> ---
>  include/linux/ksm.h |  8 +++++---
>  mm/ksm.c            | 18 +++++++++++------
>  mm/vma.c            | 49 +++++++++++++++++++++++++++++++++++++++++++--
>  3 files changed, 64 insertions(+), 11 deletions(-)
> 
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index d73095b5cd96..51787f0b0208 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -17,8 +17,8 @@
>  #ifdef CONFIG_KSM
>  int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
>  		unsigned long end, int advice, unsigned long *vm_flags);
> -
> -void ksm_add_vma(struct vm_area_struct *vma);
> +vm_flags_t ksm_vma_flags(const struct mm_struct *mm, const struct file *file,
> +			 vm_flags_t vm_flags);
>  int ksm_enable_merge_any(struct mm_struct *mm);
>  int ksm_disable_merge_any(struct mm_struct *mm);
>  int ksm_disable(struct mm_struct *mm);
> @@ -97,8 +97,10 @@ bool ksm_process_mergeable(struct mm_struct *mm);
> 
>  #else  /* !CONFIG_KSM */
> 
> -static inline void ksm_add_vma(struct vm_area_struct *vma)
> +static inline vm_flags_t ksm_vma_flags(const struct mm_struct *mm,
> +		const struct file *file, vm_flags_t vm_flags)
>  {
> +	return vm_flags;
>  }
> 
>  static inline int ksm_disable(struct mm_struct *mm)
> diff --git a/mm/ksm.c b/mm/ksm.c
> index d0c763abd499..18b3690bb69a 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2731,16 +2731,22 @@ static int __ksm_del_vma(struct vm_area_struct *vma)
>  	return 0;
>  }
>  /**
> - * ksm_add_vma - Mark vma as mergeable if compatible
> + * ksm_vma_flags - Update VMA flags to mark as mergeable if compatible
>   *
> - * @vma:  Pointer to vma
> + * @mm:       Proposed VMA's mm_struct
> + * @file:     Proposed VMA's file-backed mapping, if any.
> + * @vm_flags: Proposed VMA's flags.
> + *
> + * Returns: @vm_flags possibly updated to mark mergeable.
>   */
> -void ksm_add_vma(struct vm_area_struct *vma)
> +vm_flags_t ksm_vma_flags(const struct mm_struct *mm, const struct file *file,
> +			 vm_flags_t vm_flags)
>  {
> -	struct mm_struct *mm = vma->vm_mm;
> +	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags) &&
> +	    __ksm_should_add_vma(file, vm_flags))
> +		vm_flags |= VM_MERGEABLE;
> 
> -	if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
> -		__ksm_add_vma(vma);
> +	return vm_flags;
>  }
> 
>  static void ksm_add_vmas(struct mm_struct *mm)
> diff --git a/mm/vma.c b/mm/vma.c
> index 3ff6cfbe3338..5bebe55ea737 100644
> --- a/mm/vma.c
> +++ b/mm/vma.c
> @@ -2482,7 +2482,6 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
>  	 */
>  	if (!vma_is_anonymous(vma))
>  		khugepaged_enter_vma(vma, map->flags);
> -	ksm_add_vma(vma);
>  	*vmap = vma;
>  	return 0;
> 
> @@ -2585,6 +2584,45 @@ static void set_vma_user_defined_fields(struct vm_area_struct *vma,
>  	vma->vm_private_data = map->vm_private_data;
>  }
> 
> +static void update_ksm_flags(struct mmap_state *map)
> +{
> +	map->flags = ksm_vma_flags(map->mm, map->file, map->flags);
> +}
> +
> +/*
> + * Are we guaranteed no driver can change state such as to preclude KSM merging?
> + * If so, let's set the KSM mergeable flag early so we don't break VMA merging.
> + *
> + * This is applicable when PR_SET_MEMORY_MERGE has been set on the mm_struct via
> + * prctl() causing newly mapped VMAs to have the KSM mergeable VMA flag set.
> + *
> + * If this is not the case, then we set the flag after considering mergeability,
> + * which will prevent mergeability as, when PR_SET_MEMORY_MERGE is set, a new
> + * VMA will not have the KSM mergeability VMA flag set, but all other VMAs will,
> + * preventing any merge.
> + */
> +static bool can_set_ksm_flags_early(struct mmap_state *map)
> +{
> +	struct file *file = map->file;
> +
> +	/* Anonymous mappings have no driver which can change them. */
> +	if (!file)
> +		return true;
> +
> +	/* shmem is safe. */
> +	if (shmem_file(file))
> +		return true;
> +
> +	/*
> +	 * If .mmap_prepare() is specified, then the driver will have already
> +	 * manipulated state prior to updating KSM flags.
> +	 */
> +	if (file->f_op->mmap_prepare)
> +		return true;
> +
> +	return false;
> +}
> +
>  static unsigned long __mmap_region(struct file *file, unsigned long addr,
>  		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
>  		struct list_head *uf)
> @@ -2595,6 +2633,7 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
>  	bool have_mmap_prepare = file && file->f_op->mmap_prepare;
>  	VMA_ITERATOR(vmi, mm, addr);
>  	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
> +	bool check_ksm_early = can_set_ksm_flags_early(&map);
> 
>  	error = __mmap_prepare(&map, uf);
>  	if (!error && have_mmap_prepare)
> @@ -2602,6 +2641,9 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
>  	if (error)
>  		goto abort_munmap;
> 
> +	if (check_ksm_early)
> +		update_ksm_flags(&map);
> +
>  	/* Attempt to merge with adjacent VMAs... */
>  	if (map.prev || map.next) {
>  		VMG_MMAP_STATE(vmg, &map, /* vma = */ NULL);
> @@ -2611,6 +2653,9 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
> 
>  	/* ...but if we can't, allocate a new VMA. */
>  	if (!vma) {
> +		if (!check_ksm_early)
> +			update_ksm_flags(&map);
> +
>  		error = __mmap_new_vma(&map, &vma);
>  		if (error)
>  			goto unacct_error;
> @@ -2713,6 +2758,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  	 * Note: This happens *after* clearing old mappings in some code paths.
>  	 */
>  	flags |= VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
> +	flags = ksm_vma_flags(mm, NULL, flags);
>  	if (!may_expand_vm(mm, flags, len >> PAGE_SHIFT))
>  		return -ENOMEM;
> 
> @@ -2756,7 +2802,6 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
> 
>  	mm->map_count++;
>  	validate_mm(mm);
> -	ksm_add_vma(vma);

Well, that was in the wrong place.

>  out:
>  	perf_event_mmap(vma);
>  	mm->total_vm += len >> PAGE_SHIFT;
> --
> 2.49.0

* Re: [PATCH v2 4/4] tools/testing/selftests: add VMA merge tests for KSM merge
  2025-05-21 18:20 ` [PATCH v2 4/4] tools/testing/selftests: add VMA merge tests for KSM merge Lorenzo Stoakes
@ 2025-05-26 14:34   ` Liam R. Howlett
  0 siblings, 0 replies; 20+ messages in thread
From: Liam R. Howlett @ 2025-05-26 14:34 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Andrew Morton, Alexander Viro, Christian Brauner, Jan Kara,
	Vlastimil Babka, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

* Lorenzo Stoakes <lorenzo.stoakes@oracle.com> [250521 14:20]:
> Add test to assert that we have now allowed merging of VMAs when KSM
> merging-by-default has been set by prctl(PR_SET_MEMORY_MERGE, ...).
> 
> We simply perform a trivial mapping of adjacent VMAs expecting a merge,
> however prior to recent changes implementing this mode earlier than before,
> these merges would not have succeeded.
> 
> Assert that we have fixed this!
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
> Tested-by: Chengming Zhou <chengming.zhou@linux.dev>

Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>

> ---
>  tools/testing/selftests/mm/merge.c | 78 ++++++++++++++++++++++++++++++
>  1 file changed, 78 insertions(+)
> 
> diff --git a/tools/testing/selftests/mm/merge.c b/tools/testing/selftests/mm/merge.c
> index c76646cdf6e6..2380a5a6a529 100644
> --- a/tools/testing/selftests/mm/merge.c
> +++ b/tools/testing/selftests/mm/merge.c
> @@ -2,10 +2,12 @@
>  
>  #define _GNU_SOURCE
>  #include "../kselftest_harness.h"
> +#include <linux/prctl.h>
>  #include <stdio.h>
>  #include <stdlib.h>
>  #include <unistd.h>
>  #include <sys/mman.h>
> +#include <sys/prctl.h>
>  #include <sys/wait.h>
>  #include "vm_util.h"
>  
> @@ -31,6 +33,11 @@ FIXTURE_TEARDOWN(merge)
>  {
>  	ASSERT_EQ(munmap(self->carveout, 12 * self->page_size), 0);
>  	ASSERT_EQ(close_procmap(&self->procmap), 0);
> +	/*
> +	 * Clear unconditionally, as some tests set this. It is no issue if this
> +	 * fails (KSM may be disabled for instance).
> +	 */
> +	prctl(PR_SET_MEMORY_MERGE, 0, 0, 0, 0);
>  }
>  
>  TEST_F(merge, mprotect_unfaulted_left)
> @@ -452,4 +459,75 @@ TEST_F(merge, forked_source_vma)
>  	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr2 + 5 * page_size);
>  }
>  
> +TEST_F(merge, ksm_merge)
> +{
> +	unsigned int page_size = self->page_size;
> +	char *carveout = self->carveout;
> +	struct procmap_fd *procmap = &self->procmap;
> +	char *ptr, *ptr2;
> +	int err;
> +
> +	/*
> +	 * Map two R/W mappings immediately adjacent to one another; they should
> +	 * trivially merge:
> +	 *
> +	 * |-----------|-----------|
> +	 * |    R/W    |    R/W    |
> +	 * |-----------|-----------|
> +	 *      ptr         ptr2
> +	 */
> +
> +	ptr = mmap(&carveout[page_size], page_size, PROT_READ | PROT_WRITE,
> +		   MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
> +	ASSERT_NE(ptr, MAP_FAILED);
> +	ptr2 = mmap(&carveout[2 * page_size], page_size,
> +		    PROT_READ | PROT_WRITE,
> +		    MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
> +	ASSERT_NE(ptr2, MAP_FAILED);
> +	ASSERT_TRUE(find_vma_procmap(procmap, ptr));
> +	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr);
> +	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 2 * page_size);
> +
> +	/* Unmap the second half of this merged VMA. */
> +	ASSERT_EQ(munmap(ptr2, page_size), 0);
> +
> +	/* OK, now enable global KSM merge. We clear this on test teardown. */
> +	err = prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0);
> +	if (err == -1) {
> +		int errnum = errno;
> +
> +		/* Only non-failure case... */
> +		ASSERT_EQ(errnum, EINVAL);
> +		/* ...but indicates we should skip. */
> +		SKIP(return, "KSM memory merging not supported, skipping.");
> +	}
> +
> +	/*
> +	 * Now map a VMA adjacent to the existing that was just made
> +	 * VM_MERGEABLE, this should merge as well.
> +	 */
> +	ptr2 = mmap(&carveout[2 * page_size], page_size,
> +		    PROT_READ | PROT_WRITE,
> +		    MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
> +	ASSERT_NE(ptr2, MAP_FAILED);
> +	ASSERT_TRUE(find_vma_procmap(procmap, ptr));
> +	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr);
> +	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 2 * page_size);
> +
> +	/* Now unmap this VMA altogether. */
> +	ASSERT_EQ(munmap(ptr, 2 * page_size), 0);
> +
> +	/* Try the same operation as before, asserting this also merges fine. */
> +	ptr = mmap(&carveout[page_size], page_size, PROT_READ | PROT_WRITE,
> +		   MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
> +	ASSERT_NE(ptr, MAP_FAILED);
> +	ptr2 = mmap(&carveout[2 * page_size], page_size,
> +		    PROT_READ | PROT_WRITE,
> +		    MAP_ANON | MAP_PRIVATE | MAP_FIXED, -1, 0);
> +	ASSERT_NE(ptr2, MAP_FAILED);
> +	ASSERT_TRUE(find_vma_procmap(procmap, ptr));
> +	ASSERT_EQ(procmap->query.vma_start, (unsigned long)ptr);
> +	ASSERT_EQ(procmap->query.vma_end, (unsigned long)ptr + 2 * page_size);
> +}
> +
>  TEST_HARNESS_MAIN
> -- 
> 2.49.0
> 

* Re: [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging
  2025-05-21 18:20 ` [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging Lorenzo Stoakes
  2025-05-26 13:33   ` David Hildenbrand
  2025-05-26 14:32   ` Liam R. Howlett
@ 2025-05-28 15:38   ` xu.xin16
  2025-05-28 15:50     ` Lorenzo Stoakes
  2025-05-29 14:50   ` Vlastimil Babka
  3 siblings, 1 reply; 20+ messages in thread
From: xu.xin16 @ 2025-05-28 15:38 UTC (permalink / raw)
  To: lorenzo.stoakes
  Cc: akpm, viro, brauner, jack, Liam.Howlett, vbabka, jannh, pfalcato,
	david, chengming.zhou, linux-mm, linux-kernel, linux-fsdevel, shr,
	wang.yaxin, yang.yang29

> +static void update_ksm_flags(struct mmap_state *map)
> +{
> +	map->flags = ksm_vma_flags(map->mm, map->file, map->flags);
> +}
> +
> +/*
> + * Are we guaranteed no driver can change state such as to preclude KSM merging?
> + * If so, let's set the KSM mergeable flag early so we don't break VMA merging.
> + *
> + * This is applicable when PR_SET_MEMORY_MERGE has been set on the mm_struct via
> + * prctl() causing newly mapped VMAs to have the KSM mergeable VMA flag set.
> + *
> + * If this is not the case, then we set the flag after considering mergeability,
> + * which will prevent mergeability as, when PR_SET_MEMORY_MERGE is set, a new
> + * VMA will not have the KSM mergeability VMA flag set, but all other VMAs will,
> + * preventing any merge.
> + */
> +static bool can_set_ksm_flags_early(struct mmap_state *map)
> +{
> +	struct file *file = map->file;
> +
> +	/* Anonymous mappings have no driver which can change them. */
> +	if (!file)
> +		return true;
> +
> +	/* shmem is safe. */

Excuse me, why is it safe here? Does KSM support shmem?

> +	if (shmem_file(file))
> +		return true;
> +
> +	/*
> +	 * If .mmap_prepare() is specified, then the driver will have already
> +	 * manipulated state prior to updating KSM flags.
> +	 */

I recommend expanding the comment here with a slightly more verbose explanation
to improve code comprehension. Consider adding the following note (even though
your commit log is already sufficiently clear :)
/*
* If .mmap_prepare() is specified, then the driver will have already
* manipulated state prior to updating KSM flags. So no need to worry
* about mmap callbacks modifying vm_flags after the KSM flag has been
* updated here, which could otherwise affect KSM eligibility.
*/


> +	if (file->f_op->mmap_prepare)
> +		return true;
> +
> +	return false;
> +}
> +

* Re: [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer
  2025-05-21 18:20 ` [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer Lorenzo Stoakes
  2025-05-26 14:31   ` Liam R. Howlett
@ 2025-05-28 15:41   ` xu.xin16
  2025-05-29 13:59   ` Vlastimil Babka
  2 siblings, 0 replies; 20+ messages in thread
From: xu.xin16 @ 2025-05-28 15:41 UTC (permalink / raw)
  To: lorenzo.stoakes
  Cc: akpm, viro, brauner, jack, Liam.Howlett, vbabka, jannh, pfalcato,
	david, chengming.zhou, linux-mm, linux-kernel, linux-fsdevel, shr,
	wang.yaxin, yang.yang29

> In subsequent commits we are going to determine KSM eligibility prior to a
> VMA being constructed, at which point we will of course not yet have access
> to a VMA pointer.
> 
> It is trivial to boil down the check logic to be parameterised on
> mm_struct, file and VMA flags, so do so.
> 
> As a part of this change, additionally expose and use file_is_dax() to
> determine whether a file is being mapped under a DAX inode.
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
> ---
>  include/linux/fs.h |  7 ++++++-
>  mm/ksm.c           | 32 ++++++++++++++++++++------------
>  2 files changed, 26 insertions(+), 13 deletions(-)

All looks good to me.

Reviewed-by: Xu Xin <xu.xin16@zte.com.cn>

* Re: [PATCH v2 2/4] mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible()
  2025-05-21 18:20 ` [PATCH v2 2/4] mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible() Lorenzo Stoakes
  2025-05-26 14:31   ` Liam R. Howlett
@ 2025-05-28 15:43   ` xu.xin16
  2025-05-29 14:01   ` Vlastimil Babka
  2 siblings, 0 replies; 20+ messages in thread
From: xu.xin16 @ 2025-05-28 15:43 UTC (permalink / raw)
  To: lorenzo.stoakes
  Cc: akpm, viro, brauner, jack, Liam.Howlett, vbabka, jannh, pfalcato,
	david, chengming.zhou, linux-mm, linux-kernel, linux-fsdevel, shr,
	wang.yaxin, yang.yang29

> There's no need to spell out all the special cases, also doing it this way
> makes it absolutely clear that we preclude unmergeable VMAs in general, and
> puts the other excluded flags in stark and clear contrast.
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
> ---
>  mm/ksm.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 08d486f188ff..d0c763abd499 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -679,9 +679,8 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_v
>  
>  static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags)
>  {
> -	if (vm_flags & (VM_SHARED   | VM_MAYSHARE   | VM_PFNMAP  |
> -			VM_IO       | VM_DONTEXPAND | VM_HUGETLB |
> -			VM_MIXEDMAP | VM_DROPPABLE))
> +	if (vm_flags & (VM_SHARED  | VM_MAYSHARE | VM_SPECIAL |
> +			VM_HUGETLB | VM_DROPPABLE))
>  		return false;		/* just ignore the advice */
>  
>  	if (file_is_dax(file))
> -- 
> 2.49.0

Reviewed-by: Xu Xin <xu.xin16@zte.com.cn>

* Re: [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging
  2025-05-28 15:38   ` xu.xin16
@ 2025-05-28 15:50     ` Lorenzo Stoakes
  2025-05-29 16:30       ` Lorenzo Stoakes
  0 siblings, 1 reply; 20+ messages in thread
From: Lorenzo Stoakes @ 2025-05-28 15:50 UTC (permalink / raw)
  To: xu.xin16
  Cc: akpm, viro, brauner, jack, Liam.Howlett, vbabka, jannh, pfalcato,
	david, chengming.zhou, linux-mm, linux-kernel, linux-fsdevel, shr,
	wang.yaxin, yang.yang29

On Wed, May 28, 2025 at 11:38:32PM +0800, xu.xin16@zte.com.cn wrote:
> > +static void update_ksm_flags(struct mmap_state *map)
> > +{
> > +	map->flags = ksm_vma_flags(map->mm, map->file, map->flags);
> > +}
> > +
> > +/*
> > + * Are we guaranteed no driver can change state such as to preclude KSM merging?
> > + * If so, let's set the KSM mergeable flag early so we don't break VMA merging.
> > + *
> > + * This is applicable when PR_SET_MEMORY_MERGE has been set on the mm_struct via
> > + * prctl() causing newly mapped VMAs to have the KSM mergeable VMA flag set.
> > + *
> > + * If this is not the case, then we set the flag after considering mergeability,
> > + * which will prevent mergeability as, when PR_SET_MEMORY_MERGE is set, a new
> > + * VMA will not have the KSM mergeability VMA flag set, but all other VMAs will,
> > + * preventing any merge.
> > + */
> > +static bool can_set_ksm_flags_early(struct mmap_state *map)
> > +{
> > +	struct file *file = map->file;
> > +
> > +	/* Anonymous mappings have no driver which can change them. */
> > +	if (!file)
> > +		return true;
> > +
> > +	/* shmem is safe. */
>
> Excuse me, why it's safe here? Does KSM support shmem?

Because shmem_mmap() doesn't do anything which would invalidate KSM eligibility.

Yeah I think I misinterpreted actually - looks like shmem isn't supported
(otherwise VM_SHARED would be set rendering the VMA incompatible), _but_
as with all file-backed mappings, MAP_PRIVATE mappings _are_.

So this is still relevant :)

>
> > +	if (shmem_file(file))
> > +		return true;
> > +
> > +	/*
> > +	 * If .mmap_prepare() is specified, then the driver will have already
> > +	 * manipulated state prior to updating KSM flags.
> > +	 */
>
> Recommend expanding the comments here with slightly more verbose explanations to improve
> code comprehension. Consider adding the following note (even though your commit log is
> already sufficiently clear.   :)
> /*
> * If .mmap_prepare() is specified, then the driver will have already
> * manipulated state prior to updating KSM flags. So no need to worry
> * about mmap callbacks modifying vm_flags after the KSM flag has been
> * updated here, which could otherwise affect KSM eligibility.
> */

While this comment is really nice actually, I think we're probably ok with the
shorter version given the commit log goes into substantial detail.

>
>
> > +	if (file->f_op->mmap_prepare)
> > +		return true;
> > +
> > +	return false;
> > +}
> > +

* Re: [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer
  2025-05-21 18:20 ` [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer Lorenzo Stoakes
  2025-05-26 14:31   ` Liam R. Howlett
  2025-05-28 15:41   ` xu.xin16
@ 2025-05-29 13:59   ` Vlastimil Babka
  2 siblings, 0 replies; 20+ messages in thread
From: Vlastimil Babka @ 2025-05-29 13:59 UTC (permalink / raw)
  To: Lorenzo Stoakes, Andrew Morton
  Cc: Alexander Viro, Christian Brauner, Jan Kara, Liam R . Howlett,
	Jann Horn, Pedro Falcato, David Hildenbrand, Xu Xin,
	Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

On 5/21/25 20:20, Lorenzo Stoakes wrote:
> In subsequent commits we are going to determine KSM eligibility prior to a
> VMA being constructed, at which point we will of course not yet have access
> to a VMA pointer.
> 
> It is trivial to boil down the check logic to be parameterised on
> mm_struct, file and VMA flags, so do so.
> 
> As a part of this change, additionally expose and use file_is_dax() to
> determine whether a file is being mapped under a DAX inode.
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>


* Re: [PATCH v2 2/4] mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible()
  2025-05-21 18:20 ` [PATCH v2 2/4] mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible() Lorenzo Stoakes
  2025-05-26 14:31   ` Liam R. Howlett
  2025-05-28 15:43   ` xu.xin16
@ 2025-05-29 14:01   ` Vlastimil Babka
  2 siblings, 0 replies; 20+ messages in thread
From: Vlastimil Babka @ 2025-05-29 14:01 UTC (permalink / raw)
  To: Lorenzo Stoakes, Andrew Morton
  Cc: Alexander Viro, Christian Brauner, Jan Kara, Liam R . Howlett,
	Jann Horn, Pedro Falcato, David Hildenbrand, Xu Xin,
	Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

On 5/21/25 20:20, Lorenzo Stoakes wrote:
> There's no need to spell out all the special cases, also doing it this way
> makes it absolutely clear that we preclude unmergeable VMAs in general, and
> puts the other excluded flags in stark and clear contrast.
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>


* Re: [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging
  2025-05-21 18:20 ` [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging Lorenzo Stoakes
                     ` (2 preceding siblings ...)
  2025-05-28 15:38   ` xu.xin16
@ 2025-05-29 14:50   ` Vlastimil Babka
  2025-05-29 15:39     ` Lorenzo Stoakes
  3 siblings, 1 reply; 20+ messages in thread
From: Vlastimil Babka @ 2025-05-29 14:50 UTC (permalink / raw)
  To: Lorenzo Stoakes, Andrew Morton
  Cc: Alexander Viro, Christian Brauner, Jan Kara, Liam R . Howlett,
	Jann Horn, Pedro Falcato, David Hildenbrand, Xu Xin,
	Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

On 5/21/25 20:20, Lorenzo Stoakes wrote:
> If a user wishes to enable KSM mergeability for an entire process and all
> fork/exec'd processes that come after it, they use the prctl()
> PR_SET_MEMORY_MERGE operation.
> 
> This defaults all newly mapped VMAs to have the VM_MERGEABLE VMA flag set
> (in order to indicate they are KSM mergeable), as well as setting this flag
> for all existing VMAs.
> 
> However it also entirely and completely breaks VMA merging for the process
> and all forked (and fork/exec'd) processes.

I think merging due to e.g. mprotect() should still work, but for new VMAs,
yeah.

> This is because when a new mapping is proposed, the flags specified will
> never have VM_MERGEABLE set. However all adjacent VMAs will already have
> VM_MERGEABLE set, rendering VMAs unmergeable by default.
> 
> To work around this, we try to set the VM_MERGEABLE flag prior to
> attempting a merge. In the case of brk() this can always be done.
> 
> However on mmap() things are more complicated - while KSM is not supported
> for file-backed mappings, it is supported for MAP_PRIVATE file-backed

     ^ insert "shared" to make it obvious?

> mappings.
> 
> And these mappings may have deprecated .mmap() callbacks specified which
> could, in theory, adjust flags and thus KSM eligibility.

Right, however your can_set_ksm_flags_early() isn't testing exactly that?
More on that there.

> This is unlikely to cause an issue on merge, as any adjacent file-backed
> mappings would already have the same post-.mmap() callback attributes, and
> thus would naturally not be merged.

I'm getting a bit lost as two kinds of merging have to be discussed. If the
VMAs around have the same attributes, they would be VMA-merged, no?

> But for the purposes of establishing a VMA as KSM-eligible (as well as
> initially scanning the VMA), this is potentially very problematic.

This part I understand as we have to check if we can add VM_MERGEABLE after
mmap() has adjusted the flags, as it might have an effect on the result of
ksm_compatible()?

> So we check to determine whether this is at all possible. If not, we set
> VM_MERGEABLE prior to the merge attempt on mmap(), otherwise we retain the
> previous behaviour.
> 
> When .mmap_prepare() is more widely used, we can remove this precaution.
> 
> While this doesn't quite cover all cases, it covers a great many (all
> anonymous memory, for instance), meaning we should already see a
> significant improvement in VMA mergeability.
> 
> When it comes to file-backed mappings (other than shmem), we are
> really only interested in MAP_PRIVATE mappings, which have an available anon
> page by default. Therefore, the VM_SPECIAL restriction makes less sense for
> KSM.
> 
> In a future series we therefore intend to remove this limitation, which
> ought to simplify this implementation. However it makes sense to defer
> doing so until a later stage so we can first address this mergeability
> issue.
> 
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Fixes: d7597f59d1d3 ("mm: add new api to enable ksm per process") # please no backport!
> Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>

<snip>

> +/*
> + * Are we guaranteed no driver can change state such as to preclude KSM merging?
> + * If so, let's set the KSM mergeable flag early so we don't break VMA merging.
> + *
> + * This is applicable when PR_SET_MEMORY_MERGE has been set on the mm_struct via
> + * prctl() causing newly mapped VMAs to have the KSM mergeable VMA flag set.
> + *
> + * If this is not the case, then we set the flag after considering mergeability,

								     ^ "VMA"

> + * which will prevent mergeability as, when PR_SET_MEMORY_MERGE is set, a new

			^ "VMA"

> + * VMA will not have the KSM mergeability VMA flag set, but all other VMAs will,
> + * preventing any merge.

		    ^ "VMA"

tedious I know, but more obvious, IMHO

> + */
> +static bool can_set_ksm_flags_early(struct mmap_state *map)
> +{
> +	struct file *file = map->file;
> +
> +	/* Anonymous mappings have no driver which can change them. */
> +	if (!file)
> +		return true;
> +
> +	/* shmem is safe. */
> +	if (shmem_file(file))
> +		return true;
> +
> +	/*
> +	 * If .mmap_prepare() is specified, then the driver will have already
> +	 * manipulated state prior to updating KSM flags.
> +	 */
> +	if (file->f_op->mmap_prepare)
> +		return true;
> +
> +	return false;

So back to my reply in the commit log, why test for mmap_prepare and
otherwise assume false, and not instead test for f_op->mmap which would
result in false, and otherwise return true? Or am I assuming wrong that
there are f_ops that have neither of those two callbacks?

> +}
> +
>  static unsigned long __mmap_region(struct file *file, unsigned long addr,
>  		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
>  		struct list_head *uf)
> @@ -2595,6 +2633,7 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
>  	bool have_mmap_prepare = file && file->f_op->mmap_prepare;
>  	VMA_ITERATOR(vmi, mm, addr);
>  	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
> +	bool check_ksm_early = can_set_ksm_flags_early(&map);
> 
>  	error = __mmap_prepare(&map, uf);
>  	if (!error && have_mmap_prepare)
> @@ -2602,6 +2641,9 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
>  	if (error)
>  		goto abort_munmap;
> 
> +	if (check_ksm_early)
> +		update_ksm_flags(&map);
> +
>  	/* Attempt to merge with adjacent VMAs... */
>  	if (map.prev || map.next) {
>  		VMG_MMAP_STATE(vmg, &map, /* vma = */ NULL);
> @@ -2611,6 +2653,9 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
> 
>  	/* ...but if we can't, allocate a new VMA. */
>  	if (!vma) {
> +		if (!check_ksm_early)
> +			update_ksm_flags(&map);
> +
>  		error = __mmap_new_vma(&map, &vma);
>  		if (error)
>  			goto unacct_error;
> @@ -2713,6 +2758,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  	 * Note: This happens *after* clearing old mappings in some code paths.
>  	 */
>  	flags |= VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
> +	flags = ksm_vma_flags(mm, NULL, flags);
>  	if (!may_expand_vm(mm, flags, len >> PAGE_SHIFT))
>  		return -ENOMEM;
> 
> @@ -2756,7 +2802,6 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
> 
>  	mm->map_count++;
>  	validate_mm(mm);
> -	ksm_add_vma(vma);
>  out:
>  	perf_event_mmap(vma);
>  	mm->total_vm += len >> PAGE_SHIFT;
> --
> 2.49.0



* Re: [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging
  2025-05-29 14:50   ` Vlastimil Babka
@ 2025-05-29 15:39     ` Lorenzo Stoakes
  0 siblings, 0 replies; 20+ messages in thread
From: Lorenzo Stoakes @ 2025-05-29 15:39 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrew Morton, Alexander Viro, Christian Brauner, Jan Kara,
	Liam R . Howlett, Jann Horn, Pedro Falcato, David Hildenbrand,
	Xu Xin, Chengming Zhou, linux-mm, linux-kernel, linux-fsdevel,
	Stefan Roesch

On Thu, May 29, 2025 at 04:50:16PM +0200, Vlastimil Babka wrote:
> On 5/21/25 20:20, Lorenzo Stoakes wrote:
> > If a user wishes to enable KSM mergeability for an entire process and all
> > fork/exec'd processes that come after it, they use the prctl()
> > PR_SET_MEMORY_MERGE operation.
> >
> > This defaults all newly mapped VMAs to have the VM_MERGEABLE VMA flag set
> > (in order to indicate they are KSM mergeable), as well as setting this flag
> > for all existing VMAs.
> >
> > However it also entirely and completely breaks VMA merging for the process
> > and all forked (and fork/exec'd) processes.
>
> I think merging due to e.g. mprotect() should still work, but for new VMAs,
> yeah.

Yes for new VMAs. I'll update the cover letter subject line + commit
messages accordingly... I may have over-egged the pudding, but it's still
really serious.

But you're right, this is misleading, it's not _all_ merging, it's just
_all merging of new mappings, ever_. Which is still, you know, not great :)

>
> > This is because when a new mapping is proposed, the flags specified will
> > never have VM_MERGEABLE set. However all adjacent VMAs will already have
> > VM_MERGEABLE set, rendering VMAs unmergeable by default.
> >
> > To work around this, we try to set the VM_MERGEABLE flag prior to
> > attempting a merge. In the case of brk() this can always be done.
> >
> > However on mmap() things are more complicated - while KSM is not supported
> > for file-backed mappings, it is supported for MAP_PRIVATE file-backed
>
>      ^ insert "shared" to make it obvious?

Good spot, this is confusing as-is, will fixup on respin.

>
> > mappings.
> >
> > And these mappings may have deprecated .mmap() callbacks specified which
> > could, in theory, adjust flags and thus KSM eligibility.
>
> Right, however your can_set_ksm_flags_early() isn't testing exactly that?
> More on that there.

It's testing to see whether we are in a known case where you can go ahead
and set VM_MERGEABLE because either you know .mmap can't change _KSM_
mergabe eligibility, or it won't be invoked so can't hurt us.

Realistically almost certainly the only cases where this applies are ones
where VM_PFNMAP + friends are set, which a bunch of drivers do.

David actually proposes to stop disallowing this for KSM, at which point we
can drop this function anyway.

But that's best done as a follow-up.

>
> > This is unlikely to cause an issue on merge, as any adjacent file-backed
> > mappings would already have the same post-.mmap() callback attributes, and
> > thus would naturally not be merged.
>
> I'm getting a bit lost as two kinds of merging have to be discussed. If the
> > VMAs around have the same attributes, they would be VMA-merged, no?

The overloading of this term is very annoying.

But yeah I need to drop this bit, the VMA mergeability isn't really
applicable - I'll explain why...

My concern was that you'd set VM_MERGEABLE, then attempt a merge and
get merged with an adjacent VMA.

But _later_ the .mmap() hook, had the merge not occurred, would have set
some flags that would have made the prior merge invalid (oopsy!)

However this isn't correct.

The vma->vm_file would need to be the same for both, and therefore any
adjacent VMA would already have had .mmap() called and had their VMA flags
changed.

And therefore TL;DR I should drop this bit from the commit message...

>
> > But for the purposes of establishing a VMA as KSM-eligible (as well as
> > initially scanning the VMA), this is potentially very problematic.
>
> This part I understand as we have to check if we can add VM_MERGEABLE after
> mmap() has adjusted the flags, as it might have an effect on the result of
> ksm_compatible()?

Yes.

>
> > So we check to determine whether this is at all possible. If not, we set
> > VM_MERGEABLE prior to the merge attempt on mmap(), otherwise we retain the
> > previous behaviour.
> >
> > When .mmap_prepare() is more widely used, we can remove this precaution.
> >
> > While this doesn't quite cover all cases, it covers a great many (all
> > anonymous memory, for instance), meaning we should already see a
> > significant improvement in VMA mergeability.
> >
> > When it comes to file-backed mappings (other than shmem), we are
> > really only interested in MAP_PRIVATE mappings, which have an available anon
> > page by default. Therefore, the VM_SPECIAL restriction makes less sense for
> > KSM.
> >
> > In a future series we therefore intend to remove this limitation, which
> > ought to simplify this implementation. However it makes sense to defer
> > doing so until a later stage so we can first address this mergeability
> > issue.
> >
> > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > Fixes: d7597f59d1d3 ("mm: add new api to enable ksm per process") # please no backport!
> > Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
>
> <snip>
>
> > +/*
> > + * Are we guaranteed no driver can change state such as to preclude KSM merging?
> > + * If so, let's set the KSM mergeable flag early so we don't break VMA merging.
> > + *
> > + * This is applicable when PR_SET_MEMORY_MERGE has been set on the mm_struct via
> > + * prctl() causing newly mapped VMAs to have the KSM mergeable VMA flag set.
> > + *
> > + * If this is not the case, then we set the flag after considering mergeability,
>
> 								     ^ "VMA"
>
> > + * which will prevent mergeability as, when PR_SET_MEMORY_MERGE is set, a new
>
> 			^ "VMA"
>
> > + * VMA will not have the KSM mergeability VMA flag set, but all other VMAs will,
> > + * preventing any merge.
>
> 		    ^ "VMA"
>
> tedious I know, but more obvious, IMHO

Ack will fixup.

>
> > + */
> > +static bool can_set_ksm_flags_early(struct mmap_state *map)
> > +{
> > +	struct file *file = map->file;
> > +
> > +	/* Anonymous mappings have no driver which can change them. */
> > +	if (!file)
> > +		return true;
> > +
> > +	/* shmem is safe. */
> > +	if (shmem_file(file))
> > +		return true;
> > +
> > +	/*
> > +	 * If .mmap_prepare() is specified, then the driver will have already
> > +	 * manipulated state prior to updating KSM flags.
> > +	 */
> > +	if (file->f_op->mmap_prepare)
> > +		return true;
> > +
> > +	return false;
>
> So back to my reply in the commit log, why test for mmap_prepare and
> otherwise assume false, and not instead test for f_op->mmap which would
> result in false, and otherwise return true? Or am I assuming wrong that
> there are f_ops that have neither of those two callbacks?

Because shmem has .mmap() set but we know it's safe.

I mean, we should probably put the mmap_prepare check before shmem_file()
to make things clearer.

We plan to drop this function soon (see above) anyway. Just being mighty
cautious.

>
> > +}
> > +
> >  static unsigned long __mmap_region(struct file *file, unsigned long addr,
> >  		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
> >  		struct list_head *uf)
> > @@ -2595,6 +2633,7 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
> >  	bool have_mmap_prepare = file && file->f_op->mmap_prepare;
> >  	VMA_ITERATOR(vmi, mm, addr);
> >  	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
> > +	bool check_ksm_early = can_set_ksm_flags_early(&map);
> >
> >  	error = __mmap_prepare(&map, uf);
> >  	if (!error && have_mmap_prepare)
> > @@ -2602,6 +2641,9 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
> >  	if (error)
> >  		goto abort_munmap;
> >
> > +	if (check_ksm_early)
> > +		update_ksm_flags(&map);
> > +
> >  	/* Attempt to merge with adjacent VMAs... */
> >  	if (map.prev || map.next) {
> >  		VMG_MMAP_STATE(vmg, &map, /* vma = */ NULL);
> > @@ -2611,6 +2653,9 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
> >
> >  	/* ...but if we can't, allocate a new VMA. */
> >  	if (!vma) {
> > +		if (!check_ksm_early)
> > +			update_ksm_flags(&map);
> > +
> >  		error = __mmap_new_vma(&map, &vma);
> >  		if (error)
> >  			goto unacct_error;
> > @@ -2713,6 +2758,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >  	 * Note: This happens *after* clearing old mappings in some code paths.
> >  	 */
> >  	flags |= VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
> > +	flags = ksm_vma_flags(mm, NULL, flags);
> >  	if (!may_expand_vm(mm, flags, len >> PAGE_SHIFT))
> >  		return -ENOMEM;
> >
> > @@ -2756,7 +2802,6 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >
> >  	mm->map_count++;
> >  	validate_mm(mm);
> > -	ksm_add_vma(vma);
> >  out:
> >  	perf_event_mmap(vma);
> >  	mm->total_vm += len >> PAGE_SHIFT;
> > --
> > 2.49.0
>


* Re: [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging
  2025-05-28 15:50     ` Lorenzo Stoakes
@ 2025-05-29 16:30       ` Lorenzo Stoakes
  0 siblings, 0 replies; 20+ messages in thread
From: Lorenzo Stoakes @ 2025-05-29 16:30 UTC (permalink / raw)
  To: xu.xin16
  Cc: akpm, viro, brauner, jack, Liam.Howlett, vbabka, jannh, pfalcato,
	david, chengming.zhou, linux-mm, linux-kernel, linux-fsdevel, shr,
	wang.yaxin, yang.yang29

On Wed, May 28, 2025 at 04:50:18PM +0100, Lorenzo Stoakes wrote:
> On Wed, May 28, 2025 at 11:38:32PM +0800, xu.xin16@zte.com.cn wrote:
> > > +static void update_ksm_flags(struct mmap_state *map)
> > > +{
> > > +	map->flags = ksm_vma_flags(map->mm, map->file, map->flags);
> > > +}
> > > +
> > > +/*
> > > + * Are we guaranteed no driver can change state such as to preclude KSM merging?
> > > + * If so, let's set the KSM mergeable flag early so we don't break VMA merging.
> > > + *
> > > + * This is applicable when PR_SET_MEMORY_MERGE has been set on the mm_struct via
> > > + * prctl() causing newly mapped VMAs to have the KSM mergeable VMA flag set.
> > > + *
> > > + * If this is not the case, then we set the flag after considering mergeability,
> > > + * which will prevent mergeability as, when PR_SET_MEMORY_MERGE is set, a new
> > > + * VMA will not have the KSM mergeability VMA flag set, but all other VMAs will,
> > > + * preventing any merge.
> > > + */
> > > +static bool can_set_ksm_flags_early(struct mmap_state *map)
> > > +{
> > > +	struct file *file = map->file;
> > > +
> > > +	/* Anonymous mappings have no driver which can change them. */
> > > +	if (!file)
> > > +		return true;
> > > +
> > > +	/* shmem is safe. */
> >
> > Excuse me, why is it safe here? Does KSM support shmem?
>
> Because shmem_mmap() doesn't do anything which would invalidate the KSM flags.
>
> Yeah I think I misinterpreted actually - looks like shmem isn't supported
> (otherwise VM_SHARED would be set rendering the VMA incompatible), _but_
> as with all file-backed mappings, MAP_PRIVATE mappings _are_.
>
> So this is still relevant :)

Will update the commit message to correct the erroneous reference to shmem.

>
> >
> > > +	if (shmem_file(file))
> > > +		return true;
> > > +
> > > +	/*
> > > +	 * If .mmap_prepare() is specified, then the driver will have already
> > > +	 * manipulated state prior to updating KSM flags.
> > > +	 */
> >
> > Recommend expanding the comments here with slightly more verbose explanations to improve
> > code comprehension. Consider adding the following note (even though your commit log is
> > already sufficiently clear). :)
> > /*
> > * If .mmap_prepare() is specified, then the driver will have already
> > * manipulated state prior to updating KSM flags. So no need to worry
> > * about mmap callbacks modifying vm_flags after the KSM flag has been
> > * updated here, which could otherwise affect KSM eligibility.
> > */
>
> While this comment is really nice actually, I think we're probably ok with the
> shorter version given the commit log goes into substantial detail.

Actually on second thoughts, as I'm respinning, I'll update this and replace
with (a slightly adjusted version of) your excellent comment! :)

>
> >
> >
> > > +	if (file->f_op->mmap_prepare)
> > > +		return true;
> > > +
> > > +	return false;
> > > +}
> > > +


end of thread, other threads:[~2025-05-29 16:31 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-05-21 18:20 [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging Lorenzo Stoakes
2025-05-21 18:20 ` [PATCH v2 1/4] mm: ksm: have KSM VMA checks not require a VMA pointer Lorenzo Stoakes
2025-05-26 14:31   ` Liam R. Howlett
2025-05-28 15:41   ` xu.xin16
2025-05-29 13:59   ` Vlastimil Babka
2025-05-21 18:20 ` [PATCH v2 2/4] mm: ksm: refer to special VMAs via VM_SPECIAL in ksm_compatible() Lorenzo Stoakes
2025-05-26 14:31   ` Liam R. Howlett
2025-05-28 15:43   ` xu.xin16
2025-05-29 14:01   ` Vlastimil Babka
2025-05-21 18:20 ` [PATCH v2 3/4] mm: prevent KSM from completely breaking VMA merging Lorenzo Stoakes
2025-05-26 13:33   ` David Hildenbrand
2025-05-26 14:32   ` Liam R. Howlett
2025-05-28 15:38   ` xu.xin16
2025-05-28 15:50     ` Lorenzo Stoakes
2025-05-29 16:30       ` Lorenzo Stoakes
2025-05-29 14:50   ` Vlastimil Babka
2025-05-29 15:39     ` Lorenzo Stoakes
2025-05-21 18:20 ` [PATCH v2 4/4] tools/testing/selftests: add VMA merge tests for KSM merge Lorenzo Stoakes
2025-05-26 14:34   ` Liam R. Howlett
2025-05-21 18:23 ` [PATCH 0/4] mm: ksm: prevent KSM from entirely breaking VMA merging Lorenzo Stoakes

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).