* [PATCH v3 0/3] mm: process/cgroup ksm support
@ 2023-02-24 4:39 Stefan Roesch
2023-02-24 4:39 ` [PATCH v3 1/3] mm: add new api to enable ksm per process Stefan Roesch
` (4 more replies)
0 siblings, 5 replies; 17+ messages in thread
From: Stefan Roesch @ 2023-02-24 4:39 UTC
To: kernel-team
Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
akpm, hannes
So far KSM can only be enabled by calling madvise for memory regions. To
be able to use KSM for more workloads, KSM needs to have the ability to be
enabled / disabled at the process / cgroup level.
Use case 1:
The madvise call is not available in the programming language. An example of
this is programs with forked workloads using a garbage-collected language
without pointers. In such a language madvise cannot be made available.
In addition, the addresses of objects get moved around as they are garbage
collected. KSM sharing needs to be enabled "from the outside" for these types
of workloads.
Use case 2:
The same interpreter can also be used for workloads where KSM brings no
benefit or even has overhead. We'd like to be able to enable KSM on a
workload-by-workload basis.
Use case 3:
With the madvise call, sharing opportunities are only enabled for the current
process: it is a workload-local decision. A considerable number of sharing
opportunities may exist across multiple workloads or jobs. Only a higher-level
entity like a job scheduler or container can know for certain if it is running
one or more instances of a job. That job scheduler however doesn't have
the necessary internal workload knowledge to make targeted madvise calls.
Security concerns:
In previous discussions security concerns have been brought up. The problem is
that an individual workload does not have the knowledge about what else is
running on a machine. Therefore it has to be very conservative about which
memory areas can be shared. However, if the system is dedicated to running
multiple jobs within the same security domain, it's the job scheduler that has
the knowledge that sharing can be safely enabled and is even desirable.
Performance:
Experiments with UKSM have shown a capacity increase of around 20%.
1. New options for prctl system command
This patch series adds two new options to the prctl system call. The first
one enables KSM at the process level and the second one queries the
setting.
The setting will be inherited by child processes.
With the above setting, KSM can be enabled for the seed process of a cgroup
and all processes in the cgroup will inherit the setting.
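A minimal sketch of how such a "seed process" could use this, assuming the
PR_SET_MEMORY_MERGE / PR_GET_MEMORY_MERGE values added by this series and a
caller with CAP_SYS_RESOURCE:

    #include <sys/prctl.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <stdlib.h>

    #ifndef PR_SET_MEMORY_MERGE
    #define PR_SET_MEMORY_MERGE 67
    #define PR_GET_MEMORY_MERGE 68
    #endif

    int main(void)
    {
            pid_t pid;

            /* Enable KSM for all eligible VMAs of this process. */
            if (prctl(PR_SET_MEMORY_MERGE, 1, 0, 0, 0)) {
                    perror("PR_SET_MEMORY_MERGE"); /* e.g. EPERM without CAP_SYS_RESOURCE */
                    return EXIT_FAILURE;
            }

            pid = fork();
            if (pid < 0) {
                    perror("fork");
                    return EXIT_FAILURE;
            }
            if (pid == 0) {
                    /* The child inherits the setting; this prints 1. */
                    printf("child: %d\n", prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0));
                    _exit(EXIT_SUCCESS);
            }
            waitpid(pid, NULL, 0);
            return EXIT_SUCCESS;
    }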
2. Changes to KSM processing
When KSM is enabled at the process level, the KSM code will iterate over all
the VMAs and enable KSM for the eligible VMAs.
When forking a process that has KSM enabled, the setting will be inherited by
the new child process.
In addition, when KSM is disabled for a process, KSM will be disabled for the
VMAs where KSM had been enabled.
3. Add general_profit metric
The general_profit metric of KSM is specified in the documentation, but not
calculated. This adds the general profit metric to /sys/kernel/mm/ksm.
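For reference, a sketch of the calculation (mirroring general_profit_show()
in patch 2; all names except sizeof(struct ksm_rmap_item) correspond to the
existing /sys/kernel/mm/ksm knobs):

    all_rmap_items = max_page_sharing + pages_shared +
                     pages_unshared + pages_volatile;
    general_profit = pages_sharing * PAGE_SIZE -
                     all_rmap_items * sizeof(struct ksm_rmap_item);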
4. Add more metrics to ksm_stat
This adds the process profit and merge type metrics to /proc/<pid>/ksm_stat.
5. Add more tests to ksm_tests
This adds an option to specify the merge type to the ksm_tests. This makes it
possible to test both madvise and prctl based KSM merging. It also adds a new
option to query if prctl KSM has been enabled. Finally, it adds a fork test to
verify that the KSM process setting is inherited by child processes.
Changes:
- V3:
- folded patch 1 - 6
- folded patch 7 - 14
- folded patch 15 - 19
- Expanded on the use cases in the cover letter
- Added a section on security concerns to the cover letter
- V2:
- Added use cases to the cover letter
- Removed the tracing patch from the patch series and posted it as an
individual patch
- Refreshed repo
Stefan Roesch (3):
mm: add new api to enable ksm per process
mm: add new KSM process and sysfs knobs
selftests/mm: add new selftests for KSM
Documentation/ABI/testing/sysfs-kernel-mm-ksm | 8 +
Documentation/admin-guide/mm/ksm.rst | 8 +-
fs/proc/base.c | 5 +
include/linux/ksm.h | 19 +-
include/linux/sched/coredump.h | 1 +
include/uapi/linux/prctl.h | 2 +
kernel/sys.c | 29 ++
mm/ksm.c | 114 +++++++-
tools/include/uapi/linux/prctl.h | 2 +
tools/testing/selftests/mm/Makefile | 3 +-
tools/testing/selftests/mm/ksm_tests.c | 254 +++++++++++++++---
11 files changed, 389 insertions(+), 56 deletions(-)
base-commit: 234a68e24b120b98875a8b6e17a9dead277be16a
--
2.30.2
* [PATCH v3 1/3] mm: add new api to enable ksm per process
2023-02-24 4:39 [PATCH v3 0/3] mm: process/cgroup ksm support Stefan Roesch
@ 2023-02-24 4:39 ` Stefan Roesch
2023-03-08 16:47 ` Johannes Weiner
2023-02-24 4:39 ` [PATCH v3 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
` (3 subsequent siblings)
4 siblings, 1 reply; 17+ messages in thread
From: Stefan Roesch @ 2023-02-24 4:39 UTC
To: kernel-team
Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
akpm, hannes
This adds a new prctl API to enable and disable KSM on a per-process
basis instead of only at the VMA level (with madvise).
1) Introduce new MMF_VM_MERGE_ANY flag
This introduces the new MMF_VM_MERGE_ANY flag. When this flag is
set, kernel samepage merging (ksm) gets enabled for all vmas of a
process.
2) add flag to __ksm_enter
This change adds the flag parameter to __ksm_enter. This makes it
possible to distinguish if ksm was enabled by prctl or madvise.
3) add flag to __ksm_exit call
This adds the flag parameter to the __ksm_exit() call. This makes it
possible to distinguish if this call is for a prctl or madvise invocation.
4) invoke madvise for all vmas in scan_get_next_rmap_item
If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
over all the vmas and enable ksm if possible. For the vmas that can be
ksm enabled this is only done once.
5) support disabling of ksm for a process
This adds the ability to disable ksm for a process if ksm has been
enabled for the process.
6) add new prctl option to get and set ksm for a process
This adds two new options to the prctl system call
- enable ksm for all vmas of a process (if the vmas support it).
- query if ksm has been enabled for a process.
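A minimal sketch of the resulting userspace semantics, assuming the prctl
values added by this patch (it mirrors the kernel/sys.c hunk below):

    #include <sys/prctl.h>
    #include <errno.h>

    #ifndef PR_SET_MEMORY_MERGE
    #define PR_SET_MEMORY_MERGE 67
    #define PR_GET_MEMORY_MERGE 68
    #endif

    /*
     * Returns 0 on success. Fails with -EPERM without CAP_SYS_RESOURCE,
     * and enabling fails with -EINVAL if the process already has
     * madvise-style merging enabled (MMF_VM_MERGEABLE set).
     */
    static int ksm_merge_set(int enable)
    {
            return prctl(PR_SET_MEMORY_MERGE, enable, 0, 0, 0) ? -errno : 0;
    }

    /* Returns 1 if MMF_VM_MERGE_ANY is set, 0 otherwise. */
    static int ksm_merge_get(void)
    {
            /* arg2..arg5 must be zero, otherwise the call fails with EINVAL */
            return prctl(PR_GET_MEMORY_MERGE, 0, 0, 0, 0);
    }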
Signed-off-by: Stefan Roesch <shr@devkernel.io>
---
include/linux/ksm.h | 14 ++++---
include/linux/sched/coredump.h | 1 +
include/uapi/linux/prctl.h | 2 +
kernel/sys.c | 29 +++++++++++++++
mm/ksm.c | 67 ++++++++++++++++++++++++++++++----
5 files changed, 101 insertions(+), 12 deletions(-)
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 7e232ba59b86..d38a05a36298 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -18,20 +18,24 @@
#ifdef CONFIG_KSM
int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
unsigned long end, int advice, unsigned long *vm_flags);
-int __ksm_enter(struct mm_struct *mm);
-void __ksm_exit(struct mm_struct *mm);
+int __ksm_enter(struct mm_struct *mm, int flag);
+void __ksm_exit(struct mm_struct *mm, int flag);
static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
{
+ if (test_bit(MMF_VM_MERGE_ANY, &oldmm->flags))
+ return __ksm_enter(mm, MMF_VM_MERGE_ANY);
if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
- return __ksm_enter(mm);
+ return __ksm_enter(mm, MMF_VM_MERGEABLE);
return 0;
}
static inline void ksm_exit(struct mm_struct *mm)
{
- if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
- __ksm_exit(mm);
+ if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+ __ksm_exit(mm, MMF_VM_MERGE_ANY);
+ else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
+ __ksm_exit(mm, MMF_VM_MERGEABLE);
}
/*
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index 0e17ae7fbfd3..0ee96ea7a0e9 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -90,4 +90,5 @@ static inline int get_dumpable(struct mm_struct *mm)
#define MMF_INIT_MASK (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
MMF_DISABLE_THP_MASK | MMF_HAS_MDWE_MASK)
+#define MMF_VM_MERGE_ANY 29
#endif /* _LINUX_SCHED_COREDUMP_H */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 1312a137f7fb..759b3f53e53f 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -290,4 +290,6 @@ struct prctl_mm_map {
#define PR_SET_VMA 0x53564d41
# define PR_SET_VMA_ANON_NAME 0
+#define PR_SET_MEMORY_MERGE 67
+#define PR_GET_MEMORY_MERGE 68
#endif /* _LINUX_PRCTL_H */
diff --git a/kernel/sys.c b/kernel/sys.c
index b3cab94545ed..495bab3ed2ad 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -15,6 +15,7 @@
#include <linux/highuid.h>
#include <linux/fs.h>
#include <linux/kmod.h>
+#include <linux/ksm.h>
#include <linux/perf_event.h>
#include <linux/resource.h>
#include <linux/kernel.h>
@@ -2659,6 +2660,34 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
case PR_SET_VMA:
error = prctl_set_vma(arg2, arg3, arg4, arg5);
break;
+#ifdef CONFIG_KSM
+ case PR_SET_MEMORY_MERGE:
+ if (!capable(CAP_SYS_RESOURCE))
+ return -EPERM;
+
+ if (arg2) {
+ if (mmap_write_lock_killable(me->mm))
+ return -EINTR;
+
+ if (test_bit(MMF_VM_MERGEABLE, &me->mm->flags))
+ error = -EINVAL;
+ else if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
+ error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
+ mmap_write_unlock(me->mm);
+ } else {
+ __ksm_exit(me->mm, MMF_VM_MERGE_ANY);
+ }
+ break;
+ case PR_GET_MEMORY_MERGE:
+ if (!capable(CAP_SYS_RESOURCE))
+ return -EPERM;
+
+ if (arg2 || arg3 || arg4 || arg5)
+ return -EINVAL;
+
+ error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
+ break;
+#endif
default:
error = -EINVAL;
break;
diff --git a/mm/ksm.c b/mm/ksm.c
index 56808e3bfd19..23d6944f78ad 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1063,6 +1063,7 @@ static int unmerge_and_remove_all_rmap_items(void)
mm_slot_free(mm_slot_cache, mm_slot);
clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+ clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
mmdrop(mm);
} else
spin_unlock(&ksm_mmlist_lock);
@@ -2329,6 +2330,17 @@ static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
return rmap_item;
}
+static bool vma_ksm_mergeable(struct vm_area_struct *vma)
+{
+ if (vma->vm_flags & VM_MERGEABLE)
+ return true;
+
+ if (test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags))
+ return true;
+
+ return false;
+}
+
static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
{
struct mm_struct *mm;
@@ -2405,8 +2417,20 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
goto no_vmas;
for_each_vma(vmi, vma) {
- if (!(vma->vm_flags & VM_MERGEABLE))
+ if (!vma_ksm_mergeable(vma))
continue;
+ if (!(vma->vm_flags & VM_MERGEABLE)) {
+ unsigned long flags = vma->vm_flags;
+
+ /* madvise failed, use next vma */
+ if (ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_MERGEABLE, &flags))
+ continue;
+ /* vma, not supported as being mergeable */
+ if (!(flags & VM_MERGEABLE))
+ continue;
+
+ vm_flags_set(vma, VM_MERGEABLE);
+ }
if (ksm_scan.address < vma->vm_start)
ksm_scan.address = vma->vm_start;
if (!vma->anon_vma)
@@ -2491,6 +2515,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
mm_slot_free(mm_slot_cache, mm_slot);
clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+ clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
mmap_read_unlock(mm);
mmdrop(mm);
} else {
@@ -2595,8 +2620,9 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
return 0;
#endif
- if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
- err = __ksm_enter(mm);
+ if (!test_bit(MMF_VM_MERGEABLE, &mm->flags) &&
+ !test_bit(MMF_VM_MERGE_ANY, &mm->flags)) {
+ err = __ksm_enter(mm, MMF_VM_MERGEABLE);
if (err)
return err;
}
@@ -2622,7 +2648,7 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
}
EXPORT_SYMBOL_GPL(ksm_madvise);
-int __ksm_enter(struct mm_struct *mm)
+int __ksm_enter(struct mm_struct *mm, int flag)
{
struct ksm_mm_slot *mm_slot;
struct mm_slot *slot;
@@ -2655,7 +2681,7 @@ int __ksm_enter(struct mm_struct *mm)
list_add_tail(&slot->mm_node, &ksm_scan.mm_slot->slot.mm_node);
spin_unlock(&ksm_mmlist_lock);
- set_bit(MMF_VM_MERGEABLE, &mm->flags);
+ set_bit(flag, &mm->flags);
mmgrab(mm);
if (needs_wakeup)
@@ -2664,12 +2690,39 @@ int __ksm_enter(struct mm_struct *mm)
return 0;
}
-void __ksm_exit(struct mm_struct *mm)
+static void unmerge_vmas(struct mm_struct *mm)
+{
+ struct vm_area_struct *vma;
+ struct vma_iterator vmi;
+
+ vma_iter_init(&vmi, mm, 0);
+
+ mmap_read_lock(mm);
+ for_each_vma(vmi, vma) {
+ if (vma->vm_flags & VM_MERGEABLE) {
+ unsigned long flags = vma->vm_flags;
+
+ if (ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_UNMERGEABLE, &flags))
+ continue;
+
+ vm_flags_clear(vma, VM_MERGEABLE);
+ }
+ }
+ mmap_read_unlock(mm);
+}
+
+void __ksm_exit(struct mm_struct *mm, int flag)
{
struct ksm_mm_slot *mm_slot;
struct mm_slot *slot;
int easy_to_free = 0;
+ if (!(current->flags & PF_EXITING) && flag == MMF_VM_MERGE_ANY &&
+ test_bit(MMF_VM_MERGE_ANY, &mm->flags)) {
+ clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
+ unmerge_vmas(mm);
+ }
+
/*
* This process is exiting: if it's straightforward (as is the
* case when ksmd was never running), free mm_slot immediately.
@@ -2696,7 +2749,7 @@ void __ksm_exit(struct mm_struct *mm)
if (easy_to_free) {
mm_slot_free(mm_slot_cache, mm_slot);
- clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+ clear_bit(flag, &mm->flags);
mmdrop(mm);
} else if (mm_slot) {
mmap_write_lock(mm);
--
2.30.2
* [PATCH v3 2/3] mm: add new KSM process and sysfs knobs
2023-02-24 4:39 [PATCH v3 0/3] mm: process/cgroup ksm support Stefan Roesch
2023-02-24 4:39 ` [PATCH v3 1/3] mm: add new api to enable ksm per process Stefan Roesch
@ 2023-02-24 4:39 ` Stefan Roesch
2023-02-24 4:40 ` [PATCH v3 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
` (2 subsequent siblings)
4 siblings, 0 replies; 17+ messages in thread
From: Stefan Roesch @ 2023-02-24 4:39 UTC
To: kernel-team
Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
akpm, hannes, Bagas Sanjaya
This adds the general_profit KSM sysfs knob, as well as the process profit
and merge type metrics in /proc/<pid>/ksm_stat.
1) split off pages_volatile function
This splits off the pages_volatile function. The general_profit
calculation added later in this patch uses it.
2) expose general_profit metric
The documentation mentions a general profit metric; however, this metric
is not calculated. In addition, the formula depends on the size of
internal structures, which makes it more difficult for an administrator
to make the calculation. Add the metric for a better user experience.
3) document general_profit sysfs knob
4) calculate ksm process profit metric
The ksm documentation mentions the process profit metric and how to
calculate it. This adds the calculation of the metric (see the sketch
after this list of changes).
5) add ksm_merge_type() function
This adds the ksm_merge_type function. The function returns the merge
type for the process. For madvise it returns "madvise", for prctl it
returns "process" and otherwise it returns "none".
6) mm: expose ksm process profit metric and merge type in ksm_stat
This exposes the ksm process profit metric in /proc/<pid>/ksm_stat.
The documentation mentions the formula for the ksm process profit
metric; however, it is not calculated there. In addition, the formula
depends on the size of internal structures, so it makes sense to expose
it.
This also exposes the ksm merge type in /proc/<pid>/ksm_stat. The name of
the value is ksm_merge_type.
7) document new procfs ksm knobs
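The per-process calculation itself is small; a sketch of what
ksm_process_profit() below computes (both inputs are per-mm counters,
visible in /proc/<pid>/ksm_stat):

    process_profit = ksm_merging_pages * PAGE_SIZE -
                     ksm_rmap_items * sizeof(struct ksm_rmap_item);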
Signed-off-by: Stefan Roesch <shr@devkernel.io>
Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
---
Documentation/ABI/testing/sysfs-kernel-mm-ksm | 8 ++++
Documentation/admin-guide/mm/ksm.rst | 8 +++-
fs/proc/base.c | 5 ++
include/linux/ksm.h | 5 ++
mm/ksm.c | 47 +++++++++++++++++--
5 files changed, 69 insertions(+), 4 deletions(-)
diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-ksm b/Documentation/ABI/testing/sysfs-kernel-mm-ksm
index d244674a9480..7768e90f7a8f 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-ksm
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-ksm
@@ -51,3 +51,11 @@ Description: Control merging pages across different NUMA nodes.
When it is set to 0 only pages from the same node are merged,
otherwise pages from all nodes can be merged together (default).
+
+What: /sys/kernel/mm/ksm/general_profit
+Date: January 2023
+KernelVersion: 6.1
+Contact: Linux memory management mailing list <linux-mm@kvack.org>
+Description: Measure how effective KSM is.
+ general_profit: how effective is KSM. The formula for the
+ calculation is in Documentation/admin-guide/mm/ksm.rst.
diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index f160f9487a90..34f1d0396eee 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -159,6 +159,8 @@ stable_node_chains_prune_millisecs
The effectiveness of KSM and MADV_MERGEABLE is shown in ``/sys/kernel/mm/ksm/``:
+general_profit
+ how effective is KSM. The calculation is explained below.
pages_shared
how many shared pages are being used
pages_sharing
@@ -216,7 +218,8 @@ several times, which are unprofitable memory consumed.
ksm_rmap_items * sizeof(rmap_item).
where ksm_merging_pages is shown under the directory ``/proc/<pid>/``,
- and ksm_rmap_items is shown in ``/proc/<pid>/ksm_stat``.
+ and ksm_rmap_items is shown in ``/proc/<pid>/ksm_stat``. The process profit
+ is also shown in ``/proc/<pid>/ksm_stat`` as ksm_process_profit.
From the perspective of application, a high ratio of ``ksm_rmap_items`` to
``ksm_merging_pages`` means a bad madvise-applied policy, so developers or
@@ -227,6 +230,9 @@ so if the ``ksm_rmap_items/ksm_merging_pages`` ratio exceeds 64 on 64-bit CPU
or exceeds 128 on 32-bit CPU, then the app's madvise policy should be dropped,
because the ksm profit is approximately zero or negative.
+The ksm_merge_type in ``/proc/<pid>/ksm_stat`` shows the merge type of the
+process. Valid values are ``none``, ``madvise`` and ``process``.
+
Monitoring KSM events
=====================
diff --git a/fs/proc/base.c b/fs/proc/base.c
index ac9ebe972be0..45749051e53b 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -96,6 +96,7 @@
#include <linux/time_namespace.h>
#include <linux/resctrl.h>
#include <linux/cn_proc.h>
+#include <linux/ksm.h>
#include <trace/events/oom.h>
#include "internal.h"
#include "fd.h"
@@ -3199,6 +3200,7 @@ static int proc_pid_ksm_merging_pages(struct seq_file *m, struct pid_namespace *
return 0;
}
+
static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns,
struct pid *pid, struct task_struct *task)
{
@@ -3208,6 +3210,9 @@ static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns,
if (mm) {
seq_printf(m, "ksm_rmap_items %lu\n", mm->ksm_rmap_items);
seq_printf(m, "zero_pages_sharing %lu\n", mm->ksm_zero_pages_sharing);
+ seq_printf(m, "ksm_merging_pages %lu\n", mm->ksm_merging_pages);
+ seq_printf(m, "ksm_merge_type %s\n", ksm_merge_type(mm));
+ seq_printf(m, "ksm_process_profit %ld\n", ksm_process_profit(mm));
mmput(mm);
}
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index d38a05a36298..d5f69f18ee5a 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -55,6 +55,11 @@ struct page *ksm_might_need_to_copy(struct page *page,
void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
+#ifdef CONFIG_PROC_FS
+long ksm_process_profit(struct mm_struct *);
+const char *ksm_merge_type(struct mm_struct *mm);
+#endif /* CONFIG_PROC_FS */
+
#else /* !CONFIG_KSM */
static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
diff --git a/mm/ksm.c b/mm/ksm.c
index 23d6944f78ad..3121bc0f48f3 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3024,6 +3024,25 @@ static void wait_while_offlining(void)
}
#endif /* CONFIG_MEMORY_HOTREMOVE */
+#ifdef CONFIG_PROC_FS
+long ksm_process_profit(struct mm_struct *mm)
+{
+ return (long)mm->ksm_merging_pages * PAGE_SIZE -
+ mm->ksm_rmap_items * sizeof(struct ksm_rmap_item);
+}
+
+/* Return merge type name as string. */
+const char *ksm_merge_type(struct mm_struct *mm)
+{
+ if (test_bit(MMF_VM_MERGE_ANY, &mm->flags))
+ return "process";
+ else if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
+ return "madvise";
+ else
+ return "none";
+}
+#endif /* CONFIG_PROC_FS */
+
#ifdef CONFIG_SYSFS
/*
* This all compiles without CONFIG_SYSFS, but is a waste of space.
@@ -3271,8 +3290,7 @@ static ssize_t pages_unshared_show(struct kobject *kobj,
}
KSM_ATTR_RO(pages_unshared);
-static ssize_t pages_volatile_show(struct kobject *kobj,
- struct kobj_attribute *attr, char *buf)
+static long pages_volatile(void)
{
long ksm_pages_volatile;
@@ -3284,7 +3302,14 @@ static ssize_t pages_volatile_show(struct kobject *kobj,
*/
if (ksm_pages_volatile < 0)
ksm_pages_volatile = 0;
- return sysfs_emit(buf, "%ld\n", ksm_pages_volatile);
+
+ return ksm_pages_volatile;
+}
+
+static ssize_t pages_volatile_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sysfs_emit(buf, "%ld\n", pages_volatile());
}
KSM_ATTR_RO(pages_volatile);
@@ -3295,6 +3320,21 @@ static ssize_t zero_pages_sharing_show(struct kobject *kobj,
}
KSM_ATTR_RO(zero_pages_sharing);
+static ssize_t general_profit_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ long general_profit;
+ long all_rmap_items;
+
+ all_rmap_items = ksm_max_page_sharing + ksm_pages_shared +
+ ksm_pages_unshared + pages_volatile();
+ general_profit = ksm_pages_sharing * PAGE_SIZE -
+ all_rmap_items * sizeof(struct ksm_rmap_item);
+
+ return sysfs_emit(buf, "%ld\n", general_profit);
+}
+KSM_ATTR_RO(general_profit);
+
static ssize_t stable_node_dups_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
@@ -3360,6 +3400,7 @@ static struct attribute *ksm_attrs[] = {
&stable_node_dups_attr.attr,
&stable_node_chains_prune_millisecs_attr.attr,
&use_zero_pages_attr.attr,
+ &general_profit_attr.attr,
NULL,
};
--
2.30.2
* [PATCH v3 3/3] selftests/mm: add new selftests for KSM
2023-02-24 4:39 [PATCH v3 0/3] mm: process/cgroup ksm support Stefan Roesch
2023-02-24 4:39 ` [PATCH v3 1/3] mm: add new api to enable ksm per process Stefan Roesch
2023-02-24 4:39 ` [PATCH v3 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
@ 2023-02-24 4:40 ` Stefan Roesch
2023-02-26 5:30 ` Andrew Morton
2023-02-26 5:08 ` [PATCH v3 0/3] mm: process/cgroup ksm support Andrew Morton
2023-03-08 17:01 ` David Hildenbrand
4 siblings, 1 reply; 17+ messages in thread
From: Stefan Roesch @ 2023-02-24 4:40 UTC
To: kernel-team
Cc: shr, linux-mm, riel, mhocko, david, linux-kselftest, linux-doc,
akpm, hannes
This adds three new tests to the selftests for KSM. These tests use the
new prctl APIs to enable and disable KSM.
1) add new prctl flags to prctl header file in tools dir
This adds the new prctl flags to the include file prctl.h in the tools
directory. This makes sure they are available for testing.
2) add KSM prctl merge test
This adds the -t option to the ksm_tests program. The -t flag allows
specifying whether madvise or prctl based ksm merging should be used.
3) add KSM get merge type test
This adds the -G flag to the ksm_tests program to query with prctl
whether KSM has been enabled with prctl.
4) add KSM fork test
Add fork test to verify that the MMF_VM_MERGE_ANY flag is inherited by
the child process.
5) add two functions for debugging merge outcome
This adds two functions to report the metrics in /proc/self/ksm_stat and
/sys/kernel/mm/ksm.
The debugging can be enabled with the following command line:
make -C tools/testing/selftests TARGETS="vm" --keep-going \
EXTRA_CFLAGS=-DDEBUG=1
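For example, the new options can be exercised as follows (illustrative
invocations; the option letters match the additions below):

    ./ksm_tests -M -t 1   # merge test using prctl instead of madvise
    ./ksm_tests -G        # query merge mode via PR_GET_MEMORY_MERGE
    ./ksm_tests -F        # verify the prctl setting is inherited across fork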
Signed-off-by: Stefan Roesch <shr@devkernel.io>
---
tools/include/uapi/linux/prctl.h | 2 +
tools/testing/selftests/mm/Makefile | 3 +-
tools/testing/selftests/mm/ksm_tests.c | 254 +++++++++++++++++++++----
3 files changed, 219 insertions(+), 40 deletions(-)
diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index a5e06dcbba13..e4c629c1f1b0 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -284,4 +284,6 @@ struct prctl_mm_map {
#define PR_SET_VMA 0x53564d41
# define PR_SET_VMA_ANON_NAME 0
+#define PR_SET_MEMORY_MERGE 67
+#define PR_GET_MEMORY_MERGE 68
#endif /* _LINUX_PRCTL_H */
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index d90cdc06aa59..507cb22bdebd 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -29,7 +29,8 @@ MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/p
# LDLIBS.
MAKEFLAGS += --no-builtin-rules
-CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
+CFLAGS = -Wall -I $(top_srcdir)/tools/include/uapi
+CFLAGS += -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
LDLIBS = -lrt -lpthread
TEST_GEN_FILES = cow
TEST_GEN_FILES += compaction_test
diff --git a/tools/testing/selftests/mm/ksm_tests.c b/tools/testing/selftests/mm/ksm_tests.c
index f9eb4d67e0dd..9fb21b982dc9 100644
--- a/tools/testing/selftests/mm/ksm_tests.c
+++ b/tools/testing/selftests/mm/ksm_tests.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
#include <sys/mman.h>
+#include <sys/prctl.h>
+#include <sys/wait.h>
#include <stdbool.h>
#include <time.h>
#include <string.h>
@@ -21,6 +23,7 @@
#define KSM_PROT_STR_DEFAULT "rw"
#define KSM_USE_ZERO_PAGES_DEFAULT false
#define KSM_MERGE_ACROSS_NODES_DEFAULT true
+#define KSM_MERGE_TYPE_DEFAULT 0
#define MB (1ul << 20)
struct ksm_sysfs {
@@ -33,9 +36,17 @@ struct ksm_sysfs {
unsigned long use_zero_pages;
};
+enum ksm_merge_type {
+ KSM_MERGE_MADVISE,
+ KSM_MERGE_PRCTL,
+ KSM_MERGE_LAST = KSM_MERGE_PRCTL
+};
+
enum ksm_test_name {
CHECK_KSM_MERGE,
+ CHECK_KSM_MERGE_FORK,
CHECK_KSM_UNMERGE,
+ CHECK_KSM_GET_MERGE_TYPE,
CHECK_KSM_ZERO_PAGE_MERGE,
CHECK_KSM_NUMA_MERGE,
KSM_MERGE_TIME,
@@ -82,6 +93,55 @@ static int ksm_read_sysfs(const char *file_path, unsigned long *val)
return 0;
}
+#ifdef DEBUG
+static void ksm_print_sysfs(void)
+{
+ unsigned long max_page_sharing, pages_sharing, pages_shared;
+ unsigned long full_scans, pages_unshared, pages_volatile;
+ unsigned long stable_node_chains, stable_node_dups;
+ long general_profit;
+
+ if (ksm_read_sysfs(KSM_FP("pages_shared"), &pages_shared) ||
+ ksm_read_sysfs(KSM_FP("pages_sharing"), &pages_sharing) ||
+ ksm_read_sysfs(KSM_FP("max_page_sharing"), &max_page_sharing) ||
+ ksm_read_sysfs(KSM_FP("full_scans"), &full_scans) ||
+ ksm_read_sysfs(KSM_FP("pages_unshared"), &pages_unshared) ||
+ ksm_read_sysfs(KSM_FP("pages_volatile"), &pages_volatile) ||
+ ksm_read_sysfs(KSM_FP("stable_node_chains"), &stable_node_chains) ||
+ ksm_read_sysfs(KSM_FP("stable_node_dups"), &stable_node_dups) ||
+ ksm_read_sysfs(KSM_FP("general_profit"), (unsigned long *)&general_profit))
+ return;
+
+ printf("pages_shared : %lu\n", pages_shared);
+ printf("pages_sharing : %lu\n", pages_sharing);
+ printf("max_page_sharing : %lu\n", max_page_sharing);
+ printf("full_scans : %lu\n", full_scans);
+ printf("pages_unshared : %lu\n", pages_unshared);
+ printf("pages_volatile : %lu\n", pages_volatile);
+ printf("stable_node_chains: %lu\n", stable_node_chains);
+ printf("stable_node_dups : %lu\n", stable_node_dups);
+ printf("general_profit : %ld\n", general_profit);
+}
+
+static void ksm_print_procfs(void)
+{
+ const char *file_name = "/proc/self/ksm_stat";
+ char buffer[512];
+ FILE *f = fopen(file_name, "r");
+
+ if (!f) {
+ fprintf(stderr, "f %s\n", file_name);
+ perror("fopen");
+ return;
+ }
+
+ while (fgets(buffer, sizeof(buffer), f))
+ printf("%s", buffer);
+
+ fclose(f);
+}
+#endif
+
static int str_to_prot(char *prot_str)
{
int prot = 0;
@@ -115,7 +175,9 @@ static void print_help(void)
" -D evaluate unmerging time and speed when disabling KSM.\n"
" For this test, the size of duplicated memory area (in MiB)\n"
" must be provided using -s option\n"
- " -C evaluate the time required to break COW of merged pages.\n\n");
+ " -C evaluate the time required to break COW of merged pages.\n"
+ " -G query merge mode\n"
+ " -F evaluate that the KSM process flag is inherited\n\n");
printf(" -a: specify the access protections of pages.\n"
" <prot> must be of the form [rwx].\n"
@@ -129,6 +191,10 @@ static void print_help(void)
printf(" -m: change merge_across_nodes tunable\n"
" Default: %d\n", KSM_MERGE_ACROSS_NODES_DEFAULT);
printf(" -s: the size of duplicated memory area (in MiB)\n");
+ printf(" -t: KSM merge type\n"
+ " Default: 0\n"
+ " 0: madvise merging\n"
+ " 1: prctl merging\n");
exit(0);
}
@@ -176,12 +242,21 @@ static int ksm_do_scan(int scan_count, struct timespec start_time, int timeout)
return 0;
}
-static int ksm_merge_pages(void *addr, size_t size, struct timespec start_time, int timeout)
+static int ksm_merge_pages(int merge_type, void *addr, size_t size,
+ struct timespec start_time, int timeout)
{
- if (madvise(addr, size, MADV_MERGEABLE)) {
- perror("madvise");
- return 1;
+ if (merge_type == KSM_MERGE_MADVISE) {
+ if (madvise(addr, size, MADV_MERGEABLE)) {
+ perror("madvise");
+ return 1;
+ }
+ } else if (merge_type == KSM_MERGE_PRCTL) {
+ if (prctl(PR_SET_MEMORY_MERGE, 1)) {
+ perror("prctl");
+ return 1;
+ }
}
+
if (ksm_write_sysfs(KSM_FP("run"), 1))
return 1;
@@ -211,6 +286,11 @@ static bool assert_ksm_pages_count(long dupl_page_count)
ksm_read_sysfs(KSM_FP("max_page_sharing"), &max_page_sharing))
return false;
+#ifdef DEBUG
+ ksm_print_sysfs();
+ ksm_print_procfs();
+#endif
+
/*
* Since there must be at least 2 pages for merging and 1 page can be
* shared with the limited number of pages (max_page_sharing), sometimes
@@ -266,7 +346,8 @@ static int ksm_restore(struct ksm_sysfs *ksm_sysfs)
return 0;
}
-static int check_ksm_merge(int mapping, int prot, long page_count, int timeout, size_t page_size)
+static int check_ksm_merge(int merge_type, int mapping, int prot,
+ long page_count, int timeout, size_t page_size)
{
void *map_ptr;
struct timespec start_time;
@@ -281,13 +362,16 @@ static int check_ksm_merge(int mapping, int prot, long page_count, int timeout,
if (!map_ptr)
return KSFT_FAIL;
- if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+ if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
goto err_out;
/* verify that the right number of pages are merged */
if (assert_ksm_pages_count(page_count)) {
printf("OK\n");
- munmap(map_ptr, page_size * page_count);
+ if (merge_type == KSM_MERGE_MADVISE)
+ munmap(map_ptr, page_size * page_count);
+ else if (merge_type == KSM_MERGE_PRCTL)
+ prctl(PR_SET_MEMORY_MERGE, 0);
return KSFT_PASS;
}
@@ -297,7 +381,73 @@ static int check_ksm_merge(int mapping, int prot, long page_count, int timeout,
return KSFT_FAIL;
}
-static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t page_size)
+/* Verify that prctl ksm flag is inherited. */
+static int check_ksm_fork(void)
+{
+ int rc = KSFT_FAIL;
+ pid_t child_pid;
+
+ if (prctl(PR_SET_MEMORY_MERGE, 1)) {
+ perror("prctl");
+ return KSFT_FAIL;
+ }
+
+ child_pid = fork();
+ if (child_pid == 0) {
+ int is_on = prctl(PR_GET_MEMORY_MERGE, 0);
+
+ if (!is_on)
+ exit(KSFT_FAIL);
+
+ exit(KSFT_PASS);
+ }
+
+ if (child_pid < 0)
+ goto out;
+
+ if (waitpid(child_pid, &rc, 0) < 0)
+ rc = KSFT_FAIL;
+
+ if (prctl(PR_SET_MEMORY_MERGE, 0)) {
+ perror("prctl");
+ rc = KSFT_FAIL;
+ }
+
+out:
+ if (rc == KSFT_PASS)
+ printf("OK\n");
+ else
+ printf("Not OK\n");
+
+ return rc;
+}
+
+static int check_ksm_get_merge_type(void)
+{
+ if (prctl(PR_SET_MEMORY_MERGE, 1)) {
+ perror("prctl set");
+ return 1;
+ }
+
+ int is_on = prctl(PR_GET_MEMORY_MERGE, 0);
+
+ if (prctl(PR_SET_MEMORY_MERGE, 0)) {
+ perror("prctl set");
+ return 1;
+ }
+
+ int is_off = prctl(PR_GET_MEMORY_MERGE, 0);
+
+ if (is_on && is_off) {
+ printf("OK\n");
+ return KSFT_PASS;
+ }
+
+ printf("Not OK\n");
+ return KSFT_FAIL;
+}
+
+static int check_ksm_unmerge(int merge_type, int mapping, int prot, int timeout, size_t page_size)
{
void *map_ptr;
struct timespec start_time;
@@ -313,7 +463,7 @@ static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t page_siz
if (!map_ptr)
return KSFT_FAIL;
- if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+ if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
goto err_out;
/* change 1 byte in each of the 2 pages -- KSM must automatically unmerge them */
@@ -337,8 +487,8 @@ static int check_ksm_unmerge(int mapping, int prot, int timeout, size_t page_siz
return KSFT_FAIL;
}
-static int check_ksm_zero_page_merge(int mapping, int prot, long page_count, int timeout,
- bool use_zero_pages, size_t page_size)
+static int check_ksm_zero_page_merge(int merge_type, int mapping, int prot, long page_count,
+ int timeout, bool use_zero_pages, size_t page_size)
{
void *map_ptr;
struct timespec start_time;
@@ -356,7 +506,7 @@ static int check_ksm_zero_page_merge(int mapping, int prot, long page_count, int
if (!map_ptr)
return KSFT_FAIL;
- if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+ if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
goto err_out;
/*
@@ -402,8 +552,8 @@ static int get_first_mem_node(void)
return get_next_mem_node(numa_max_node());
}
-static int check_ksm_numa_merge(int mapping, int prot, int timeout, bool merge_across_nodes,
- size_t page_size)
+static int check_ksm_numa_merge(int merge_type, int mapping, int prot, int timeout,
+ bool merge_across_nodes, size_t page_size)
{
void *numa1_map_ptr, *numa2_map_ptr;
struct timespec start_time;
@@ -439,8 +589,8 @@ static int check_ksm_numa_merge(int mapping, int prot, int timeout, bool merge_a
memset(numa2_map_ptr, '*', page_size);
/* try to merge the pages */
- if (ksm_merge_pages(numa1_map_ptr, page_size, start_time, timeout) ||
- ksm_merge_pages(numa2_map_ptr, page_size, start_time, timeout))
+ if (ksm_merge_pages(merge_type, numa1_map_ptr, page_size, start_time, timeout) ||
+ ksm_merge_pages(merge_type, numa2_map_ptr, page_size, start_time, timeout))
goto err_out;
/*
@@ -466,7 +616,8 @@ static int check_ksm_numa_merge(int mapping, int prot, int timeout, bool merge_a
return KSFT_FAIL;
}
-static int ksm_merge_hugepages_time(int mapping, int prot, int timeout, size_t map_size)
+static int ksm_merge_hugepages_time(int merge_type, int mapping, int prot,
+ int timeout, size_t map_size)
{
void *map_ptr, *map_ptr_orig;
struct timespec start_time, end_time;
@@ -508,7 +659,7 @@ static int ksm_merge_hugepages_time(int mapping, int prot, int timeout, size_t m
perror("clock_gettime");
goto err_out;
}
- if (ksm_merge_pages(map_ptr, map_size, start_time, timeout))
+ if (ksm_merge_pages(merge_type, map_ptr, map_size, start_time, timeout))
goto err_out;
if (clock_gettime(CLOCK_MONOTONIC_RAW, &end_time)) {
perror("clock_gettime");
@@ -533,7 +684,7 @@ static int ksm_merge_hugepages_time(int mapping, int prot, int timeout, size_t m
return KSFT_FAIL;
}
-static int ksm_merge_time(int mapping, int prot, int timeout, size_t map_size)
+static int ksm_merge_time(int merge_type, int mapping, int prot, int timeout, size_t map_size)
{
void *map_ptr;
struct timespec start_time, end_time;
@@ -549,7 +700,7 @@ static int ksm_merge_time(int mapping, int prot, int timeout, size_t map_size)
perror("clock_gettime");
goto err_out;
}
- if (ksm_merge_pages(map_ptr, map_size, start_time, timeout))
+ if (ksm_merge_pages(merge_type, map_ptr, map_size, start_time, timeout))
goto err_out;
if (clock_gettime(CLOCK_MONOTONIC_RAW, &end_time)) {
perror("clock_gettime");
@@ -574,7 +725,7 @@ static int ksm_merge_time(int mapping, int prot, int timeout, size_t map_size)
return KSFT_FAIL;
}
-static int ksm_unmerge_time(int mapping, int prot, int timeout, size_t map_size)
+static int ksm_unmerge_time(int merge_type, int mapping, int prot, int timeout, size_t map_size)
{
void *map_ptr;
struct timespec start_time, end_time;
@@ -589,7 +740,7 @@ static int ksm_unmerge_time(int mapping, int prot, int timeout, size_t map_size)
perror("clock_gettime");
goto err_out;
}
- if (ksm_merge_pages(map_ptr, map_size, start_time, timeout))
+ if (ksm_merge_pages(merge_type, map_ptr, map_size, start_time, timeout))
goto err_out;
if (clock_gettime(CLOCK_MONOTONIC_RAW, &start_time)) {
@@ -621,7 +772,7 @@ static int ksm_unmerge_time(int mapping, int prot, int timeout, size_t map_size)
return KSFT_FAIL;
}
-static int ksm_cow_time(int mapping, int prot, int timeout, size_t page_size)
+static int ksm_cow_time(int merge_type, int mapping, int prot, int timeout, size_t page_size)
{
void *map_ptr;
struct timespec start_time, end_time;
@@ -660,7 +811,7 @@ static int ksm_cow_time(int mapping, int prot, int timeout, size_t page_size)
memset(map_ptr + page_size * i, '+', i / 2 + 1);
memset(map_ptr + page_size * (i + 1), '+', i / 2 + 1);
}
- if (ksm_merge_pages(map_ptr, page_size * page_count, start_time, timeout))
+ if (ksm_merge_pages(merge_type, map_ptr, page_size * page_count, start_time, timeout))
goto err_out;
if (clock_gettime(CLOCK_MONOTONIC_RAW, &start_time)) {
@@ -697,6 +848,7 @@ int main(int argc, char *argv[])
int ret, opt;
int prot = 0;
int ksm_scan_limit_sec = KSM_SCAN_LIMIT_SEC_DEFAULT;
+ int merge_type = KSM_MERGE_TYPE_DEFAULT;
long page_count = KSM_PAGE_COUNT_DEFAULT;
size_t page_size = sysconf(_SC_PAGESIZE);
struct ksm_sysfs ksm_sysfs_old;
@@ -705,7 +857,7 @@ int main(int argc, char *argv[])
bool merge_across_nodes = KSM_MERGE_ACROSS_NODES_DEFAULT;
long size_MB = 0;
- while ((opt = getopt(argc, argv, "ha:p:l:z:m:s:MUZNPCHD")) != -1) {
+ while ((opt = getopt(argc, argv, "ha:p:l:z:m:s:t:FGMUZNPCHD")) != -1) {
switch (opt) {
case 'a':
prot = str_to_prot(optarg);
@@ -745,6 +897,20 @@ int main(int argc, char *argv[])
printf("Size must be greater than 0\n");
return KSFT_FAIL;
}
+ case 't':
+ {
+ int tmp = atoi(optarg);
+
+ if (tmp < 0 || tmp > KSM_MERGE_LAST) {
+ printf("Invalid merge type\n");
+ return KSFT_FAIL;
+ }
+ merge_type = atoi(optarg);
+ }
+ break;
+ case 'F':
+ test_name = CHECK_KSM_MERGE_FORK;
+ break;
case 'M':
break;
case 'U':
@@ -753,6 +919,9 @@ int main(int argc, char *argv[])
case 'Z':
test_name = CHECK_KSM_ZERO_PAGE_MERGE;
break;
+ case 'G':
+ test_name = CHECK_KSM_GET_MERGE_TYPE;
+ break;
case 'N':
test_name = CHECK_KSM_NUMA_MERGE;
break;
@@ -795,35 +964,42 @@ int main(int argc, char *argv[])
switch (test_name) {
case CHECK_KSM_MERGE:
- ret = check_ksm_merge(MAP_PRIVATE | MAP_ANONYMOUS, prot, page_count,
+ ret = check_ksm_merge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot, page_count,
ksm_scan_limit_sec, page_size);
break;
+ case CHECK_KSM_MERGE_FORK:
+ ret = check_ksm_fork();
+ break;
case CHECK_KSM_UNMERGE:
- ret = check_ksm_unmerge(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
- page_size);
+ ret = check_ksm_unmerge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+ ksm_scan_limit_sec, page_size);
+ break;
+ case CHECK_KSM_GET_MERGE_TYPE:
+ ret = check_ksm_get_merge_type();
break;
case CHECK_KSM_ZERO_PAGE_MERGE:
- ret = check_ksm_zero_page_merge(MAP_PRIVATE | MAP_ANONYMOUS, prot, page_count,
- ksm_scan_limit_sec, use_zero_pages, page_size);
+ ret = check_ksm_zero_page_merge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+ page_count, ksm_scan_limit_sec, use_zero_pages,
+ page_size);
break;
case CHECK_KSM_NUMA_MERGE:
- ret = check_ksm_numa_merge(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
- merge_across_nodes, page_size);
+ ret = check_ksm_numa_merge(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+ ksm_scan_limit_sec, merge_across_nodes, page_size);
break;
case KSM_MERGE_TIME:
if (size_MB == 0) {
printf("Option '-s' is required.\n");
return KSFT_FAIL;
}
- ret = ksm_merge_time(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
- size_MB);
+ ret = ksm_merge_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+ ksm_scan_limit_sec, size_MB);
break;
case KSM_MERGE_TIME_HUGE_PAGES:
if (size_MB == 0) {
printf("Option '-s' is required.\n");
return KSFT_FAIL;
}
- ret = ksm_merge_hugepages_time(MAP_PRIVATE | MAP_ANONYMOUS, prot,
+ ret = ksm_merge_hugepages_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
ksm_scan_limit_sec, size_MB);
break;
case KSM_UNMERGE_TIME:
@@ -831,12 +1007,12 @@ int main(int argc, char *argv[])
printf("Option '-s' is required.\n");
return KSFT_FAIL;
}
- ret = ksm_unmerge_time(MAP_PRIVATE | MAP_ANONYMOUS, prot,
+ ret = ksm_unmerge_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
ksm_scan_limit_sec, size_MB);
break;
case KSM_COW_TIME:
- ret = ksm_cow_time(MAP_PRIVATE | MAP_ANONYMOUS, prot, ksm_scan_limit_sec,
- page_size);
+ ret = ksm_cow_time(merge_type, MAP_PRIVATE | MAP_ANONYMOUS, prot,
+ ksm_scan_limit_sec, page_size);
break;
}
--
2.30.2
* Re: [PATCH v3 0/3] mm: process/cgroup ksm support
2023-02-24 4:39 [PATCH v3 0/3] mm: process/cgroup ksm support Stefan Roesch
` (2 preceding siblings ...)
2023-02-24 4:40 ` [PATCH v3 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
@ 2023-02-26 5:08 ` Andrew Morton
2023-02-27 17:13 ` Stefan Roesch
2023-03-07 18:48 ` Stefan Roesch
2023-03-08 17:01 ` David Hildenbrand
4 siblings, 2 replies; 17+ messages in thread
From: Andrew Morton @ 2023-02-26 5:08 UTC
To: Stefan Roesch
Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
linux-doc, hannes
On Thu, 23 Feb 2023 20:39:57 -0800 Stefan Roesch <shr@devkernel.io> wrote:
> So far KSM can only be enabled by calling madvise for memory regions. To
> be able to use KSM for more workloads, KSM needs to have the ability to be
> enabled / disabled at the process / cgroup level.
I'll toss this in for integration and testing, but I'd like to see
reviewer input before proceeding further.
Please plan on adding suitable user-facing documentation? Presumably a
patch for the prctl manpage?
* Re: [PATCH v3 3/3] selftests/mm: add new selftests for KSM
2023-02-24 4:40 ` [PATCH v3 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
@ 2023-02-26 5:30 ` Andrew Morton
2023-02-27 17:19 ` Stefan Roesch
0 siblings, 1 reply; 17+ messages in thread
From: Andrew Morton @ 2023-02-26 5:30 UTC
To: Stefan Roesch
Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
linux-doc, hannes, Mathieu Desnoyers
On Thu, 23 Feb 2023 20:40:00 -0800 Stefan Roesch <shr@devkernel.io> wrote:
> This adds three new tests to the selftests for KSM. These tests use the
> new prctl APIs to enable and disable KSM.
>
> ...
>
> diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
> index d90cdc06aa59..507cb22bdebd 100644
> --- a/tools/testing/selftests/mm/Makefile
> +++ b/tools/testing/selftests/mm/Makefile
> @@ -29,7 +29,8 @@ MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/p
> # LDLIBS.
> MAKEFLAGS += --no-builtin-rules
>
> -CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
> +CFLAGS = -Wall -I $(top_srcdir)/tools/include/uapi
> +CFLAGS += -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
> LDLIBS = -lrt -lpthread
> TEST_GEN_FILES = cow
> TEST_GEN_FILES += compaction_test
This change runs afoul of the recently merged 8eb3751c73bec
("selftests: vm: Fix incorrect kernel headers search path").
I did this:
--- a/tools/testing/selftests/mm/Makefile~selftests-mm-add-new-selftests-for-ksm
+++ b/tools/testing/selftests/mm/Makefile
@@ -29,7 +29,7 @@ MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/p
# LDLIBS.
MAKEFLAGS += --no-builtin-rules
-CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
+CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/tools/include/uapi $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
LDLIBS = -lrt -lpthread
TEST_GEN_FILES = cow
TEST_GEN_FILES += compaction_test
_
But I expect it's a bit wrong. Please check?
* Re: [PATCH v3 0/3] mm: process/cgroup ksm support
2023-02-26 5:08 ` [PATCH v3 0/3] mm: process/cgroup ksm support Andrew Morton
@ 2023-02-27 17:13 ` Stefan Roesch
2023-03-07 18:48 ` Stefan Roesch
1 sibling, 0 replies; 17+ messages in thread
From: Stefan Roesch @ 2023-02-27 17:13 UTC
To: Andrew Morton
Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
linux-doc, hannes
Andrew Morton <akpm@linux-foundation.org> writes:
> On Thu, 23 Feb 2023 20:39:57 -0800 Stefan Roesch <shr@devkernel.io> wrote:
>
>> So far KSM can only be enabled by calling madvise for memory regions. To
>> be able to use KSM for more workloads, KSM needs to have the ability to be
>> enabled / disabled at the process / cgroup level.
>
> I'll toss this in for integration and testing, but I'd like to see
> reviewer input before proceeding further.
>
> Please plan on adding suitable user-facing documentation? Presumably a
> patch for the prctl manpage?
I'll work on a patch for the prctl manpage.
* Re: [PATCH v3 3/3] selftests/mm: add new selftests for KSM
2023-02-26 5:30 ` Andrew Morton
@ 2023-02-27 17:19 ` Stefan Roesch
2023-02-27 17:24 ` Mathieu Desnoyers
0 siblings, 1 reply; 17+ messages in thread
From: Stefan Roesch @ 2023-02-27 17:19 UTC
To: Andrew Morton
Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
linux-doc, hannes, Mathieu Desnoyers
Andrew Morton <akpm@linux-foundation.org> writes:
> On Thu, 23 Feb 2023 20:40:00 -0800 Stefan Roesch <shr@devkernel.io> wrote:
>
>> This adds three new tests to the selftests for KSM. These tests use the
>> new prctl APIs to enable and disable KSM.
>>
>> ...
>>
>> diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
>> index d90cdc06aa59..507cb22bdebd 100644
>> --- a/tools/testing/selftests/mm/Makefile
>> +++ b/tools/testing/selftests/mm/Makefile
>> @@ -29,7 +29,8 @@ MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/p
>> # LDLIBS.
>> MAKEFLAGS += --no-builtin-rules
>>
>> -CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
>> +CFLAGS = -Wall -I $(top_srcdir)/tools/include/uapi
>> +CFLAGS += -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
>> LDLIBS = -lrt -lpthread
>> TEST_GEN_FILES = cow
>> TEST_GEN_FILES += compaction_test
>
> This change runs afoul of the recently merged 8eb3751c73bec
> ("selftests: vm: Fix incorrect kernel headers search path").
>
> I did this:
>
> --- a/tools/testing/selftests/mm/Makefile~selftests-mm-add-new-selftests-for-ksm
> +++ b/tools/testing/selftests/mm/Makefile
> @@ -29,7 +29,7 @@ MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/p
> # LDLIBS.
> MAKEFLAGS += --no-builtin-rules
>
> -CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
> +CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/tools/include/uapi $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
> LDLIBS = -lrt -lpthread
> TEST_GEN_FILES = cow
> TEST_GEN_FILES += compaction_test
> _
>
> But I expect it's a bit wrong. Please check?
This change looks good.
* Re: [PATCH v3 3/3] selftests/mm: add new selftests for KSM
2023-02-27 17:19 ` Stefan Roesch
@ 2023-02-27 17:24 ` Mathieu Desnoyers
0 siblings, 0 replies; 17+ messages in thread
From: Mathieu Desnoyers @ 2023-02-27 17:24 UTC
To: Stefan Roesch, Andrew Morton
Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
linux-doc, hannes
On 2023-02-27 12:19, Stefan Roesch wrote:
>
> Andrew Morton <akpm@linux-foundation.org> writes:
>
>> On Thu, 23 Feb 2023 20:40:00 -0800 Stefan Roesch <shr@devkernel.io> wrote:
>>
>>> This adds three new tests to the selftests for KSM. These tests use the
>>> new prctl APIs to enable and disable KSM.
>>>
>>> ...
>>>
>>> diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
>>> index d90cdc06aa59..507cb22bdebd 100644
>>> --- a/tools/testing/selftests/mm/Makefile
>>> +++ b/tools/testing/selftests/mm/Makefile
>>> @@ -29,7 +29,8 @@ MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/p
>>> # LDLIBS.
>>> MAKEFLAGS += --no-builtin-rules
>>>
>>> -CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
>>> +CFLAGS = -Wall -I $(top_srcdir)/tools/include/uapi
>>> +CFLAGS += -I $(top_srcdir) -I $(top_srcdir)/usr/include $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
>>> LDLIBS = -lrt -lpthread
>>> TEST_GEN_FILES = cow
>>> TEST_GEN_FILES += compaction_test
>>
>> This change runs afoul of the recently merged 8eb3751c73bec
>> ("selftests: vm: Fix incorrect kernel headers search path").
>>
>> I did this:
>>
>> --- a/tools/testing/selftests/mm/Makefile~selftests-mm-add-new-selftests-for-ksm
>> +++ b/tools/testing/selftests/mm/Makefile
>> @@ -29,7 +29,7 @@ MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/' -e 's/ppc64.*/p
>> # LDLIBS.
>> MAKEFLAGS += --no-builtin-rules
>>
>> -CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
>> +CFLAGS = -Wall -I $(top_srcdir) -I $(top_srcdir)/tools/include/uapi $(EXTRA_CFLAGS) $(KHDR_INCLUDES)
>> LDLIBS = -lrt -lpthread
>> TEST_GEN_FILES = cow
>> TEST_GEN_FILES += compaction_test
>> _
>>
>> But I expect it's a bit wrong. Please check?
> This change looks good.
As the content of tools/include/uapi is indeed part of the kernel
sources and not generated as the result of 'make headers' in an output
directory which can be overridden by O=<...>, referring to it from
$(top_srcdir) seems appropriate.
lgtm
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
* Re: [PATCH v3 0/3] mm: process/cgroup ksm support
2023-02-26 5:08 ` [PATCH v3 0/3] mm: process/cgroup ksm support Andrew Morton
2023-02-27 17:13 ` Stefan Roesch
@ 2023-03-07 18:48 ` Stefan Roesch
1 sibling, 0 replies; 17+ messages in thread
From: Stefan Roesch @ 2023-03-07 18:48 UTC
To: Andrew Morton
Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
linux-doc, hannes
Andrew Morton <akpm@linux-foundation.org> writes:
> On Thu, 23 Feb 2023 20:39:57 -0800 Stefan Roesch <shr@devkernel.io> wrote:
>
>> So far KSM can only be enabled by calling madvise for memory regions. To
>> be able to use KSM for more workloads, KSM needs to have the ability to be
>> enabled / disabled at the process / cgroup level.
>
> I'll toss this in for integration and testing, but I'd like to see
> reviewer input before proceeding further.
>
> Please plan on adding suitable user-facing documentation? Presumably a
> patch for the prctl manpage?
The doc patch has been posted:
https://lore.kernel.org/linux-man/20230227220206.436662-1-shr@devkernel.io/
* Re: [PATCH v3 1/3] mm: add new api to enable ksm per process
2023-02-24 4:39 ` [PATCH v3 1/3] mm: add new api to enable ksm per process Stefan Roesch
@ 2023-03-08 16:47 ` Johannes Weiner
2023-03-08 22:16 ` Stefan Roesch
0 siblings, 1 reply; 17+ messages in thread
From: Johannes Weiner @ 2023-03-08 16:47 UTC
To: Stefan Roesch
Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
linux-doc, akpm
On Thu, Feb 23, 2023 at 08:39:58PM -0800, Stefan Roesch wrote:
> This adds a new prctl API to enable and disable KSM on a per-process
> basis instead of only at the VMA level (with madvise).
>
> 1) Introduce new MMF_VM_MERGE_ANY flag
>
> This introduces the new MMF_VM_MERGE_ANY flag. When this flag is
> set, kernel samepage merging (ksm) gets enabled for all vmas of a
> process.
>
> 2) add flag to __ksm_enter
>
> This change adds the flag parameter to __ksm_enter. This makes it
> possible to distinguish if ksm was enabled by prctl or madvise.
>
> 3) add flag to __ksm_exit call
>
> This adds the flag parameter to the __ksm_exit() call. This makes it
> possible to distinguish if this call is for a prctl or madvise invocation.
>
> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>
> If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
> over all the vmas and enable ksm if possible. For the vmas that can be
> ksm enabled this is only done once.
>
> 5) support disabling of ksm for a process
>
> This adds the ability to disable ksm for a process if ksm has been
> enabled for the process.
>
> 6) add new prctl option to get and set ksm for a process
>
> This adds two new options to the prctl system call
> - enable ksm for all vmas of a process (if the vmas support it).
> - query if ksm has been enabled for a process.
>
> Signed-off-by: Stefan Roesch <shr@devkernel.io>
Hey Stefan, thanks for merging the patches into one. I found it much
easier to review.
Overall this looks straight-forward to me. A few comments below:
> @@ -2659,6 +2660,34 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
> case PR_SET_VMA:
> error = prctl_set_vma(arg2, arg3, arg4, arg5);
> break;
> +#ifdef CONFIG_KSM
> + case PR_SET_MEMORY_MERGE:
> + if (!capable(CAP_SYS_RESOURCE))
> + return -EPERM;
> +
> + if (arg2) {
> + if (mmap_write_lock_killable(me->mm))
> + return -EINTR;
> +
> + if (test_bit(MMF_VM_MERGEABLE, &me->mm->flags))
> + error = -EINVAL;
So if the workload has already madvised specific VMAs the
process-enablement will fail. Why is that? Shouldn't it be possible to
override a local decision from an outside context that has more
perspective on both sharing opportunities and security aspects?
If there is a good reason for it, the -EINVAL should be addressed in
the manpage. And maybe add a comment here as well.
> + else if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
> + error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
> + mmap_write_unlock(me->mm);
> + } else {
> + __ksm_exit(me->mm, MMF_VM_MERGE_ANY);
> + }
> + break;
> + case PR_GET_MEMORY_MERGE:
> + if (!capable(CAP_SYS_RESOURCE))
> + return -EPERM;
> +
> + if (arg2 || arg3 || arg4 || arg5)
> + return -EINVAL;
> +
> + error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
> + break;
> +#endif
> default:
> error = -EINVAL;
> break;
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 56808e3bfd19..23d6944f78ad 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1063,6 +1063,7 @@ static int unmerge_and_remove_all_rmap_items(void)
>
> mm_slot_free(mm_slot_cache, mm_slot);
> clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> + clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
> mmdrop(mm);
> } else
> spin_unlock(&ksm_mmlist_lock);
> @@ -2329,6 +2330,17 @@ static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
> return rmap_item;
> }
>
> +static bool vma_ksm_mergeable(struct vm_area_struct *vma)
> +{
> + if (vma->vm_flags & VM_MERGEABLE)
> + return true;
> +
> + if (test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags))
> + return true;
> +
> + return false;
> +}
> +
> static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
> {
> struct mm_struct *mm;
> @@ -2405,8 +2417,20 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
> goto no_vmas;
>
> for_each_vma(vmi, vma) {
> - if (!(vma->vm_flags & VM_MERGEABLE))
> + if (!vma_ksm_mergeable(vma))
> continue;
> + if (!(vma->vm_flags & VM_MERGEABLE)) {
IMO, the helper obscures the interaction between the vma flag and the
per-process flag here. How about:
if (!(vma->vm_flags & VM_MERGEABLE)) {
if (!test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags))
continue;
/*
* With per-process merging enabled, have the MM scan
* enroll any existing and new VMAs on the fly.
*/
ksm_madvise();
}
> + unsigned long flags = vma->vm_flags;
> +
> + /* madvise failed, use next vma */
> + if (ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_MERGEABLE, &flags))
> + continue;
> + /* vma, not supported as being mergeable */
> + if (!(flags & VM_MERGEABLE))
> + continue;
> +
> + vm_flags_set(vma, VM_MERGEABLE);
I don't understand the local flags. Can't it pass &vma->vm_flags to
ksm_madvise()? It'll set VM_MERGEABLE on success. And you know it
wasn't set before because the whole thing is inside the !set
branch. The return value doesn't seem super useful, it's only the flag
setting that matters:
ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_MERGEABLE, &vma->vm_flags);
/* madvise can fail, and will skip special vmas (pfnmaps and such) */
if (!(vma->vm_flags & VM_MERGEABLE))
continue;
> + }
> if (ksm_scan.address < vma->vm_start)
> ksm_scan.address = vma->vm_start;
> if (!vma->anon_vma)
> @@ -2491,6 +2515,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>
> mm_slot_free(mm_slot_cache, mm_slot);
> clear_bit(MMF_VM_MERGEABLE, &mm->flags);
> + clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
> mmap_read_unlock(mm);
> mmdrop(mm);
> } else {
> @@ -2664,12 +2690,39 @@ int __ksm_enter(struct mm_struct *mm)
> return 0;
> }
>
> -void __ksm_exit(struct mm_struct *mm)
> +static void unmerge_vmas(struct mm_struct *mm)
> +{
> + struct vm_area_struct *vma;
> + struct vma_iterator vmi;
> +
> + vma_iter_init(&vmi, mm, 0);
> +
> + mmap_read_lock(mm);
> + for_each_vma(vmi, vma) {
> + if (vma->vm_flags & VM_MERGEABLE) {
> + unsigned long flags = vma->vm_flags;
> +
> + if (ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_UNMERGEABLE, &flags))
> + continue;
> +
> + vm_flags_clear(vma, VM_MERGEABLE);
ksm_madvise() tests and clears VM_MERGEABLE, so AFAICS
for_each_vma(vmi, vma)
ksm_madvise();
should do it...
> + }
> + }
> + mmap_read_unlock(mm);
> +}
> +
> +void __ksm_exit(struct mm_struct *mm, int flag)
> {
> struct ksm_mm_slot *mm_slot;
> struct mm_slot *slot;
> int easy_to_free = 0;
>
> + if (!(current->flags & PF_EXITING) && flag == MMF_VM_MERGE_ANY &&
> + test_bit(MMF_VM_MERGE_ANY, &mm->flags)) {
> + clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
> + unmerge_vmas(mm);
...and then it's short enough to just open-code it here and drop the
unmerge_vmas() helper.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v3 0/3] mm: process/cgroup ksm support
2023-02-24 4:39 [PATCH v3 0/3] mm: process/cgroup ksm support Stefan Roesch
` (3 preceding siblings ...)
2023-02-26 5:08 ` [PATCH v3 0/3] mm: process/cgroup ksm support Andrew Morton
@ 2023-03-08 17:01 ` David Hildenbrand
2023-03-08 17:30 ` Johannes Weiner
4 siblings, 1 reply; 17+ messages in thread
From: David Hildenbrand @ 2023-03-08 17:01 UTC (permalink / raw)
To: Stefan Roesch, kernel-team
Cc: linux-mm, riel, mhocko, linux-kselftest, linux-doc, akpm, hannes
For some reason gmail thought it would be a good idea to move this into
the SPAM folder, so I only saw the recent replies just now.
I'm going to have a look at this soonish.
One point that popped up in the past and that I raised on the last RFC:
we should think about letting processes *opt out/disable* KSM on their
own. Either completely, or for selected VMAs.
Reasoning is that an application might really not want some memory
regions to be applicable to KSM (memory de-duplication attacks?
Knowing that KSM on some regions will be counter-productive).
For example, remembering if MADV_UNMERGEABLE was called and not only
clearing the VMA flag. So even if KSM would be force-enabled by some
tooling after the process started, such regions would not get considered
for KSM.
It would be a bit like how we handle THP.
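To make that concrete, a rough sketch of such an opt-out (hypothetical:
VM_UNMERGEABLE does not exist today; it would be a sticky bit set once
by MADV_UNMERGEABLE, analogous to VM_NOHUGEPAGE for THP):

    /* sketch only: VM_UNMERGEABLE is a hypothetical sticky opt-out bit */
    static bool vma_ksm_allowed(struct vm_area_struct *vma)
    {
            if (vma->vm_flags & VM_UNMERGEABLE)     /* set by MADV_UNMERGEABLE */
                    return false;                   /* opt-out beats prctl */

            return (vma->vm_flags & VM_MERGEABLE) ||
                   test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags);
    }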
On 24.02.23 05:39, Stefan Roesch wrote:
> So far KSM can only be enabled by calling madvise for memory regions. To
> be able to use KSM for more workloads, KSM needs to have the ability to be
> enabled / disabled at the process / cgroup level.
>
> Use case 1:
> The madvise call is not available in the programming language. An example for
> this are programs with forked workloads using a garbage collected language without
> pointers. In such a language madvise cannot be made available.
>
> In addition the addresses of objects get moved around as they are garbage
> collected. KSM sharing needs to be enabled "from the outside" for these type of
> workloads.
>
> Use case 2:
> The same interpreter can also be used for workloads where KSM brings no
> benefit or even has overhead. We'd like to be able to enable KSM on a workload
> by workload basis.
>
> Use case 3:
> With the madvise call sharing opportunities are only enabled for the current
> process: it is a workload-local decision. A considerable number of sharing
> opportuniites may exist across multiple workloads or jobs. Only a higler level
> entity like a job scheduler or container can know for certain if its running
> one or more instances of a job. That job scheduler however doesn't have
> the necessary internal worklaod knowledge to make targeted madvise calls.
>
> Security concerns:
> In previous discussions security concerns have been brought up. The problem is
> that an individual workload does not have the knowledge about what else is
> running on a machine. Therefore it has to be very conservative in what memory
> areas can be shared or not. However, if the system is dedicated to running
> multiple jobs within the same security domain, its the job scheduler that has
> the knowledge that sharing can be safely enabled and is even desirable.
Note that there are some papers about why limiting memory deduplication
attacks to single security domains is not sufficient. Especially, the
remote deduplication attacks fall into that category IIRC.
--
Thanks,
David / dhildenb
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v3 0/3] mm: process/cgroup ksm support
2023-03-08 17:01 ` David Hildenbrand
@ 2023-03-08 17:30 ` Johannes Weiner
2023-03-08 18:41 ` David Hildenbrand
0 siblings, 1 reply; 17+ messages in thread
From: Johannes Weiner @ 2023-03-08 17:30 UTC (permalink / raw)
To: David Hildenbrand
Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
linux-kselftest, linux-doc, akpm
Hey David,
On Wed, Mar 08, 2023 at 06:01:14PM +0100, David Hildenbrand wrote:
> For some reason gmail thought it would be a good idea to move this into the
> SPAM folder, so I only saw the recent replies just now.
>
> I'm going to have a look at this soonish.
Thanks! More eyes are always helpful.
> One point that popped up in the past and that I raised on the last RFC: we
> should think about letting processes *opt out/disable* KSM on their own.
> Either completely, or for selected VMAs.
>
> Reasoning is that an application might really not want some memory
> regions to be applicable to KSM (memory de-duplication attacks?
> Knowing that KSM on some regions will be counter-productive).
>
> For example, remembering if MADV_UNMERGEABLE was called and not only
> clearing the VMA flag. So even if KSM would be force-enabled by some tooling
> after the process started, such regions would not get considered for KSM.
>
> It would be a bit like how we handle THP.
I'm not sure the THP comparison is apt. THP is truly a local
optimization that depends on the workload's access patterns. The
environment isn't a true factor. It makes some sense that if there is
a global policy to generally use THP, the workload should be able to opt
out based on known sparse access patterns. At least until THP allocation
strategy inside the kernel becomes smarter!
Merging opportunities and security questions are trickier. The
application might know which data is sensitive, but it doesn't know
whether its environment is safe or subject to memory attacks, so it
cannot make that decision purely from inside.
There is a conceivable usecase where multiple instances of the same
job are running inside a safe shared security domain and using the
same sensitive data.
There is a conceivable usecase where the system and the workload
collaborate to merge non-sensitive data across security domains.
I'm honestly not sure which usecase is more likely. My gut feeling is
the first one, simply because of broader concerns of multiple security
domains sharing kernel instances or physical hardware.
> On 24.02.23 05:39, Stefan Roesch wrote:
> > So far KSM can only be enabled by calling madvise for memory regions. To
> > be able to use KSM for more workloads, KSM needs to have the ability to be
> > enabled / disabled at the process / cgroup level.
> >
> > Use case 1:
> > The madvise call is not available in the programming language. An example for
> > this are programs with forked workloads using a garbage collected language without
> > pointers. In such a language madvise cannot be made available.
> >
> > In addition the addresses of objects get moved around as they are garbage
> > collected. KSM sharing needs to be enabled "from the outside" for these type of
> > workloads.
> >
> > Use case 2:
> > The same interpreter can also be used for workloads where KSM brings no
> > benefit or even has overhead. We'd like to be able to enable KSM on a workload
> > by workload basis.
> >
> > Use case 3:
> > With the madvise call sharing opportunities are only enabled for the current
> > process: it is a workload-local decision. A considerable number of sharing
> > opportuniites may exist across multiple workloads or jobs. Only a higler level
> > entity like a job scheduler or container can know for certain if its running
> > one or more instances of a job. That job scheduler however doesn't have
> > the necessary internal worklaod knowledge to make targeted madvise calls.
> >
> > Security concerns:
> > In previous discussions security concerns have been brought up. The problem is
> > that an individual workload does not have the knowledge about what else is
> > running on a machine. Therefore it has to be very conservative in what memory
> > areas can be shared or not. However, if the system is dedicated to running
> > multiple jobs within the same security domain, its the job scheduler that has
> > the knowledge that sharing can be safely enabled and is even desirable.
>
> Note that there are some papers about why limiting memory deduplication
> attacks to single security domains is not sufficient. Especially, the remote
> deduplication attacks fall into that category IIRC.
I think it would be good to elaborate on that and include any caveats
in the documentation.
Ultimately, the bar isn't whether there are attack vectors on a subset
of possible usecases, but whether there are usecases where this can be
used safely, which is obviously true.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v3 0/3] mm: process/cgroup ksm support
2023-03-08 17:30 ` Johannes Weiner
@ 2023-03-08 18:41 ` David Hildenbrand
0 siblings, 0 replies; 17+ messages in thread
From: David Hildenbrand @ 2023-03-08 18:41 UTC (permalink / raw)
To: Johannes Weiner
Cc: Stefan Roesch, kernel-team, linux-mm, riel, mhocko,
linux-kselftest, linux-doc, akpm
>> One point that popped up in the past and that I raised on the last RFC: we
>> should think about letting processes *opt out/disable* KSM on their own.
>> Either completely, or for selected VMAs.
>>
>> Reasoning is that an application might really not want some memory
>> regions to be applicable to KSM (memory de-duplication attacks?
>> Knowing that KSM on some regions will be counter-productive).
>>
>> For example, remembering if MADV_UNMERGEABLE was called and not only
>> clearing the VMA flag. So even if KSM would be force-enabled by some tooling
>> after the process started, such regions would not get considered for KSM.
>>
>> It would be a bit like how we handle THP.
>
> I'm not sure the THP comparison is apt. THP is truly a local
> optimization that depends on the workload's access patterns. The
> environment isn't a true factor. It makes some sense that if there is
> a global policy to generally use THP, the workload should be able to opt out
> based on known sparse access patterns. At least until THP allocation
> strategy inside the kernel becomes smarter!
Yes, and some features really don't want THP, at least for some period
of time (e.g., userfaultfd), because they are to some degree
incompatible with the idea of THP populating more memory than was accessed.
Page pinning + KSM was one of the remaining cases where force-enabling
KSM could have made a real difference (IOW, been buggy); we discussed it
the last time this was proposed. That should be fixed now. I guess besides
that, most features should be compatible with KSM nowadays. So
force-enabling it should not result in actual issues I guess.
>
> Merging opportunities and security questions are trickier. The
> application might know which data is sensitive, but it doesn't know
> whether its environment is safe or subject to memory attacks, so it
> cannot make that decision purely from inside.
I agree regarding security. Regarding merging opportunities, I am not so
sure. There are certainly examples where an application knows best that
memory deduplication is mostly a lost bet (if a lot of randomization or
pointers are involved most probably).
>
> There is a conceivable usecase where multiple instances of the same
> job are running inside a safe shared security domain and using the
> same sensitive data.
Yes. IMHO, such special applications could just enable KSM manually,
though, instead of enabling it for each and every last piece of
anonymous memory that doesn't make sense to get deduplicated :)
But of course, I see the simplicity in just enabling it globally.
>
> There is a conceivable usecase where the system and the workload
> collaborate to merge non-sensitive data across security domains.
>
> I'm honestly not sure which usecase is more likely. My gut feeling is
> the first one, simply because of broader concerns of multiple security
> domains sharing kernel instances or physical hardware.
>
See my side note below.
>> On 24.02.23 05:39, Stefan Roesch wrote:
>>> So far KSM can only be enabled by calling madvise for memory regions. To
>>> be able to use KSM for more workloads, KSM needs to have the ability to be
>>> enabled / disabled at the process / cgroup level.
>>>
>>> Use case 1:
>>> The madvise call is not available in the programming language. An example for
>>> this are programs with forked workloads using a garbage collected language without
>>> pointers. In such a language madvise cannot be made available.
>>>
>>> In addition the addresses of objects get moved around as they are garbage
>>> collected. KSM sharing needs to be enabled "from the outside" for these type of
>>> workloads.
>>>
>>> Use case 2:
>>> The same interpreter can also be used for workloads where KSM brings no
>>> benefit or even has overhead. We'd like to be able to enable KSM on a workload
>>> by workload basis.
>>>
>>> Use case 3:
>>> With the madvise call sharing opportunities are only enabled for the current
>>> process: it is a workload-local decision. A considerable number of sharing
>>> opportuniites may exist across multiple workloads or jobs. Only a higler level
>>> entity like a job scheduler or container can know for certain if its running
>>> one or more instances of a job. That job scheduler however doesn't have
>>> the necessary internal worklaod knowledge to make targeted madvise calls.
>>>
>>> Security concerns:
>>> In previous discussions security concerns have been brought up. The problem is
>>> that an individual workload does not have the knowledge about what else is
>>> running on a machine. Therefore it has to be very conservative in what memory
>>> areas can be shared or not. However, if the system is dedicated to running
>>> multiple jobs within the same security domain, its the job scheduler that has
>>> the knowledge that sharing can be safely enabled and is even desirable.
>>
>> Note that there are some papers about why limiting memory deduplication
>> attacks to single security domains is not sufficient. Especially, the remote
>> deduplication attacks fall into that category IIRC.
>
> I think it would be good to elaborate on that and include any caveats
> in the documentation.
Yes. The main point I would make is that we should encourage eventual
users to think twice instead of blindly enabling this feature. Good
documentation is certainly helpful.
>
> Ultimately, the bar isn't whether there are attack vectors on a subset
> of possible usecases, but whether there are usecases where this can be
> used safely, which is obviously true.
I agree. But I still have to point out that the security implications
might be rather subtle and surprising (e.g., even within a single
security domain). Sure,
there are setups that certainly don't care, I totally agree.
Side note:
Of course, I wonder how many workloads would place identical data into
anonymous memory where it would have to get deduplicated, instead of,
say, mmaping a file.
In the VM world it all makes sense to me, because the kernel, libraries,
...executables may be identical and loaded into guest memory (->
anonymous memory) where we'd just wish to deduplicate them. In ordinary
processes, I'm not so sure how much deduplication potential there really
is once pointers etc. are involved and memory allocators go crazy on
placing unrelated data into the same page. There is one prime example,
though, that might be different, which is the shared zeropage I guess.
I'd be curious which data the mentioned 20% actually deduplicate:
according to [1], some workloads mostly only deduplicate the shared
zeropage (in their Microsoft Edge scenario, 84--93% of all
deduplicated pages are the zeropage). Deduplicating the shared zeropage
is obviously less security-relevant, and one could easily optimize KSM
to only try deduplicating that and avoid a lot of unstable nodes.
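An untested sketch of that zeropage-only idea (names hypothetical):

    /* sketch: only consider zero-filled pages for deduplication */
    static bool ksm_page_is_zero(struct page *page)
    {
            void *addr = kmap_local_page(page);
            bool zero = !memchr_inv(addr, 0, PAGE_SIZE);

            kunmap_local(addr);
            /* if true, map to the shared zeropage instead of
             * creating stable/unstable tree nodes */
            return zero;
    }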
Of course, just a thought on memory deduplication on process level.
[1] https://ieeexplore.ieee.org/document/7546546
--
Thanks,
David / dhildenb
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v3 1/3] mm: add new api to enable ksm per process
2023-03-08 16:47 ` Johannes Weiner
@ 2023-03-08 22:16 ` Stefan Roesch
2023-03-09 4:59 ` Johannes Weiner
0 siblings, 1 reply; 17+ messages in thread
From: Stefan Roesch @ 2023-03-08 22:16 UTC (permalink / raw)
To: Johannes Weiner
Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
linux-doc, akpm
Johannes Weiner <hannes@cmpxchg.org> writes:
> On Thu, Feb 23, 2023 at 08:39:58PM -0800, Stefan Roesch wrote:
>> This adds a new prctl API to enable and disable KSM on a per-process
>> basis instead of only on a per-VMA basis (with madvise).
>>
>> 1) Introduce new MMF_VM_MERGE_ANY flag
>>
>> This introduces the new flag MMF_VM_MERGE_ANY flag. When this flag is
>> set, kernel samepage merging (ksm) gets enabled for all vma's of a
>> process.
>>
>> 2) add flag to __ksm_enter
>>
>> This change adds the flag parameter to __ksm_enter. This allows to
>> distinguish if ksm was called by prctl or madvise.
>>
>> 3) add flag to __ksm_exit call
>>
>> This adds the flag parameter to the __ksm_exit() call. This allows to
>> distinguish if this call is for an prctl or madvise invocation.
>>
>> 4) invoke madvise for all vmas in scan_get_next_rmap_item
>>
>> If the new flag MMF_VM_MERGE_ANY has been set for a process, iterate
>> over all the vmas and enable ksm if possible. For the vmas that can be
>> ksm enabled this is only done once.
>>
>> 5) support disabling of ksm for a process
>>
>> This adds the ability to disable ksm for a process if ksm has been
>> enabled for the process.
>>
>> 6) add new prctl option to get and set ksm for a process
>>
>> This adds two new options to the prctl system call
>> - enable ksm for all vmas of a process (if the vmas support it).
>> - query if ksm has been enabled for a process.
>>
>> Signed-off-by: Stefan Roesch <shr@devkernel.io>
>
> Hey Stefan, thanks for merging the patches into one. I found it much
> easier to review.
>
> Overall this looks straight-forward to me. A few comments below:
>
>> @@ -2659,6 +2660,34 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>> case PR_SET_VMA:
>> error = prctl_set_vma(arg2, arg3, arg4, arg5);
>> break;
>> +#ifdef CONFIG_KSM
>> + case PR_SET_MEMORY_MERGE:
>> + if (!capable(CAP_SYS_RESOURCE))
>> + return -EPERM;
>> +
>> + if (arg2) {
>> + if (mmap_write_lock_killable(me->mm))
>> + return -EINTR;
>> +
>> + if (test_bit(MMF_VM_MERGEABLE, &me->mm->flags))
>> + error = -EINVAL;
>
> So if the workload has already madvised specific VMAs the
> process-enablement will fail. Why is that? Shouldn't it be possible to
> override a local decision from an outside context that has more
> perspective on both sharing opportunities and security aspects?
>
> If there is a good reason for it, the -EINVAL should be addressed in
> the manpage. And maybe add a comment here as well.
>
This makes sense, I'll remove the check above.
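The branch would then look roughly like this (sketch of the planned
change, not the final patch):

    if (arg2) {
            if (mmap_write_lock_killable(me->mm))
                    return -EINTR;

            if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
                    error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
            mmap_write_unlock(me->mm);
    } else {
            __ksm_exit(me->mm, MMF_VM_MERGE_ANY);
    }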
>> + else if (!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags))
>> + error = __ksm_enter(me->mm, MMF_VM_MERGE_ANY);
>> + mmap_write_unlock(me->mm);
>> + } else {
>> + __ksm_exit(me->mm, MMF_VM_MERGE_ANY);
>> + }
>> + break;
>> + case PR_GET_MEMORY_MERGE:
>> + if (!capable(CAP_SYS_RESOURCE))
>> + return -EPERM;
>> +
>> + if (arg2 || arg3 || arg4 || arg5)
>> + return -EINVAL;
>> +
>> + error = !!test_bit(MMF_VM_MERGE_ANY, &me->mm->flags);
>> + break;
>> +#endif
>> default:
>> error = -EINVAL;
>> break;
>> diff --git a/mm/ksm.c b/mm/ksm.c
>> index 56808e3bfd19..23d6944f78ad 100644
>> --- a/mm/ksm.c
>> +++ b/mm/ksm.c
>> @@ -1063,6 +1063,7 @@ static int unmerge_and_remove_all_rmap_items(void)
>>
>> mm_slot_free(mm_slot_cache, mm_slot);
>> clear_bit(MMF_VM_MERGEABLE, &mm->flags);
>> + clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>> mmdrop(mm);
>> } else
>> spin_unlock(&ksm_mmlist_lock);
>> @@ -2329,6 +2330,17 @@ static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
>> return rmap_item;
>> }
>>
>> +static bool vma_ksm_mergeable(struct vm_area_struct *vma)
>> +{
>> + if (vma->vm_flags & VM_MERGEABLE)
>> + return true;
>> +
>> + if (test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags))
>> + return true;
>> +
>> + return false;
>> +}
>> +
>> static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>> {
>> struct mm_struct *mm;
>> @@ -2405,8 +2417,20 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>> goto no_vmas;
>>
>> for_each_vma(vmi, vma) {
>> - if (!(vma->vm_flags & VM_MERGEABLE))
>> + if (!vma_ksm_mergeable(vma))
>> continue;
>> + if (!(vma->vm_flags & VM_MERGEABLE)) {
>
> IMO, the helper obscures the interaction between the vma flag and the
> per-process flag here. How about:
>
> if (!(vma->vm_flags & VM_MERGEABLE)) {
> if (!test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags))
> continue;
>
> /*
> * With per-process merging enabled, have the MM scan
> * enroll any existing and new VMAs on the fly.
> */
> ksm_madvise();
> }
>
>> + unsigned long flags = vma->vm_flags;
>> +
>> + /* madvise failed, use next vma */
>> + if (ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_MERGEABLE, &flags))
>> + continue;
>> + /* vma, not supported as being mergeable */
>> + if (!(flags & VM_MERGEABLE))
>> + continue;
>> +
>> + vm_flags_set(vma, VM_MERGEABLE);
>
> I don't understand the local flags. Can't it pass &vma->vm_flags to
> ksm_madvise()? It'll set VM_MERGEABLE on success. And you know it
> wasn't set before because the whole thing is inside the !set
> branch. The return value doesn't seem super useful, it's only the flag
> setting that matters:
>
> ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_MERGEABLE, &vma->vm_flags);
> /* madvise can fail, and will skip special vmas (pfnmaps and such) */
> if (!(vma->vm_flags & VM_MERGEABLE))
> continue;
>
vm_flags is defined as const, so I cannot pass it directly to the
function; that's why I'm using a local variable for it.
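(For readers following along: since vma->vm_flags became
const-qualified, updates have to go through the wrapper helpers, hence
the pattern in the patch -- roughly:)

    unsigned long flags = vma->vm_flags;    /* writable copy for ksm_madvise() */

    if (!ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_MERGEABLE, &flags) &&
        (flags & VM_MERGEABLE))
            vm_flags_set(vma, VM_MERGEABLE); /* commit the bit via the accessor */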
>> + }
>> if (ksm_scan.address < vma->vm_start)
>> ksm_scan.address = vma->vm_start;
>> if (!vma->anon_vma)
>> @@ -2491,6 +2515,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>>
>> mm_slot_free(mm_slot_cache, mm_slot);
>> clear_bit(MMF_VM_MERGEABLE, &mm->flags);
>> + clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>> mmap_read_unlock(mm);
>> mmdrop(mm);
>> } else {
>
>> @@ -2664,12 +2690,39 @@ int __ksm_enter(struct mm_struct *mm)
>> return 0;
>> }
>>
>> -void __ksm_exit(struct mm_struct *mm)
>> +static void unmerge_vmas(struct mm_struct *mm)
>> +{
>> + struct vm_area_struct *vma;
>> + struct vma_iterator vmi;
>> +
>> + vma_iter_init(&vmi, mm, 0);
>> +
>> + mmap_read_lock(mm);
>> + for_each_vma(vmi, vma) {
>> + if (vma->vm_flags & VM_MERGEABLE) {
>> + unsigned long flags = vma->vm_flags;
>> +
>> + if (ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_UNMERGEABLE, &flags))
>> + continue;
>> +
>> + vm_flags_clear(vma, VM_MERGEABLE);
>
> ksm_madvise() tests and clears VM_MERGEABLE, so AFAICS
>
> for_each_vma(vmi, vma)
> ksm_madvise();
>
> should do it...
>
This is the same problem. vma->vm_flags is defined as const.
+ if (vma->vm_flags & VM_MERGEABLE) {
This will be removed.
>> + }
>> + }
>> + mmap_read_unlock(mm);
>> +}
>> +
>> +void __ksm_exit(struct mm_struct *mm, int flag)
>> {
>> struct ksm_mm_slot *mm_slot;
>> struct mm_slot *slot;
>> int easy_to_free = 0;
>>
>> + if (!(current->flags & PF_EXITING) && flag == MMF_VM_MERGE_ANY &&
>> + test_bit(MMF_VM_MERGE_ANY, &mm->flags)) {
>> + clear_bit(MMF_VM_MERGE_ANY, &mm->flags);
>> + unmerge_vmas(mm);
>
> ...and then it's short enough to just open-code it here and drop the
> unmerge_vmas() helper.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v3 1/3] mm: add new api to enable ksm per process
2023-03-08 22:16 ` Stefan Roesch
@ 2023-03-09 4:59 ` Johannes Weiner
2023-03-09 22:33 ` Stefan Roesch
0 siblings, 1 reply; 17+ messages in thread
From: Johannes Weiner @ 2023-03-09 4:59 UTC (permalink / raw)
To: Stefan Roesch
Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
linux-doc, akpm
On Wed, Mar 08, 2023 at 02:16:36PM -0800, Stefan Roesch wrote:
> Johannes Weiner <hannes@cmpxchg.org> writes:
> > On Thu, Feb 23, 2023 at 08:39:58PM -0800, Stefan Roesch wrote:
> >> @@ -2405,8 +2417,20 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
> >> goto no_vmas;
> >>
> >> for_each_vma(vmi, vma) {
> >> - if (!(vma->vm_flags & VM_MERGEABLE))
> >> + if (!vma_ksm_mergeable(vma))
> >> continue;
> >> + if (!(vma->vm_flags & VM_MERGEABLE)) {
> >
> > IMO, the helper obscures the interaction between the vma flag and the
> > per-process flag here. How about:
> >
> > if (!(vma->vm_flags & VM_MERGEABLE)) {
> > if (!test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags))
> > continue;
> >
> > /*
> > * With per-process merging enabled, have the MM scan
> > * enroll any existing and new VMAs on the fly.
> > */
> > ksm_madvise();
> > }
> >
> >> + unsigned long flags = vma->vm_flags;
> >> +
> >> + /* madvise failed, use next vma */
> >> + if (ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_MERGEABLE, &flags))
> >> + continue;
> >> + /* vma, not supported as being mergeable */
> >> + if (!(flags & VM_MERGEABLE))
> >> + continue;
> >> +
> >> + vm_flags_set(vma, VM_MERGEABLE);
> >
> > I don't understand the local flags. Can't it pass &vma->vm_flags to
> > ksm_madvise()? It'll set VM_MERGEABLE on success. And you know it
> > wasn't set before because the whole thing is inside the !set
> > branch. The return value doesn't seem super useful, it's only the flag
> > setting that matters:
> >
> > ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_MERGEABLE, &vma->vm_flags);
> > /* madvise can fail, and will skip special vmas (pfnmaps and such) */
> > if (!(vma->vm_flags & VM_MERGEABLE))
> > continue;
> >
>
> vm_flags is defined as const, so I cannot pass it directly to the
> function; that's why I'm using a local variable for it.
Oops, good catch.
However, while looking at the flag helpers, I'm also realizing that
modifications require the mmap_sem in write mode, which this code
doesn't. This function might potentially scan the entire process
address space, so you can't just change the lock mode, either.
Staring more at this, do you actually need to set VM_MERGEABLE on the
individual vmas? There are only a few places that check VM_MERGEABLE,
and AFAICS they can all just check for MMF_VM_MERGE_ANY also.
You'd need to factor out the vma compatibility checks from
ksm_madvise(), and skip over special vmas during the mm scan. But
those tests are all stable under the read lock, so that's fine.
The other thing ksm_madvise() does is ksm_enter() - but that's
obviously not needed from inside the loop over ksm_enter'd mms. :)
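Something like the below, i.e. the VMA-type checks lifted out of
ksm_madvise()'s MADV_MERGEABLE case (sketch; name and exact flag list
illustrative):

    static bool vma_ksm_compatible(struct vm_area_struct *vma)
    {
            if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE | VM_PFNMAP |
                                 VM_IO | VM_DONTEXPAND | VM_HUGETLB |
                                 VM_MIXEDMAP))
                    return false;           /* skip special VMAs */

            if (vma_is_dax(vma))
                    return false;

    #ifdef VM_SAO
            if (vma->vm_flags & VM_SAO)
                    return false;
    #endif
            return true;
    }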
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH v3 1/3] mm: add new api to enable ksm per process
2023-03-09 4:59 ` Johannes Weiner
@ 2023-03-09 22:33 ` Stefan Roesch
0 siblings, 0 replies; 17+ messages in thread
From: Stefan Roesch @ 2023-03-09 22:33 UTC (permalink / raw)
To: Johannes Weiner
Cc: kernel-team, linux-mm, riel, mhocko, david, linux-kselftest,
linux-doc, akpm
Johannes Weiner <hannes@cmpxchg.org> writes:
> On Wed, Mar 08, 2023 at 02:16:36PM -0800, Stefan Roesch wrote:
>> Johannes Weiner <hannes@cmpxchg.org> writes:
>> > On Thu, Feb 23, 2023 at 08:39:58PM -0800, Stefan Roesch wrote:
>> >> @@ -2405,8 +2417,20 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
>> >> goto no_vmas;
>> >>
>> >> for_each_vma(vmi, vma) {
>> >> - if (!(vma->vm_flags & VM_MERGEABLE))
>> >> + if (!vma_ksm_mergeable(vma))
>> >> continue;
>> >> + if (!(vma->vm_flags & VM_MERGEABLE)) {
>> >
>> > IMO, the helper obscures the interaction between the vma flag and the
>> > per-process flag here. How about:
>> >
>> > if (!(vma->vm_flags & VM_MERGEABLE)) {
>> > if (!test_bit(MMF_VM_MERGE_ANY, &vma->vm_mm->flags))
>> > continue;
>> >
>> > /*
>> > * With per-process merging enabled, have the MM scan
>> > * enroll any existing and new VMAs on the fly.
>> > */
>> > ksm_madvise();
>> > }
>> >
>> >> + unsigned long flags = vma->vm_flags;
>> >> +
>> >> + /* madvise failed, use next vma */
>> >> + if (ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_MERGEABLE, &flags))
>> >> + continue;
>> >> + /* vma, not supported as being mergeable */
>> >> + if (!(flags & VM_MERGEABLE))
>> >> + continue;
>> >> +
>> >> + vm_flags_set(vma, VM_MERGEABLE);
>> >
>> > I don't understand the local flags. Can't it pass &vma->vm_flags to
>> > ksm_madvise()? It'll set VM_MERGEABLE on success. And you know it
>> > wasn't set before because the whole thing is inside the !set
>> > branch. The return value doesn't seem super useful, it's only the flag
>> > setting that matters:
>> >
>> > ksm_madvise(vma, vma->vm_start, vma->vm_end, MADV_MERGEABLE, &vma->vm_flags);
>> > /* madvise can fail, and will skip special vmas (pfnmaps and such) */
>> > if (!(vma->vm_flags & VM_MERGEABLE))
>> > continue;
>> >
>>
>> vm_flags is defined as const, so I cannot pass it directly to the
>> function; that's why I'm using a local variable for it.
>
> Oops, good catch.
>
> However, while looking at the flag helpers, I'm also realizing that
> modifications require the mmap_sem in write mode, which this code
> doesn't. This function might potentially scan the entire process
> address space, so you can't just change the lock mode, either.
>
> Staring more at this, do you actually need to set VM_MERGEABLE on the
> individual vmas? There are only a few places that check VM_MERGEABLE,
> and AFAICS they can all just check for MMF_VM_MERGE_ANY also.
>
> You'd need to factor out the vma compatibility checks from
> ksm_madvise(), and skip over special vmas during the mm scan. But
> those tests are all stable under the read lock, so that's fine.
>
> The other thing ksm_madvise() does is ksm_enter() - but that's
> obviously not needed from inside the loop over ksm_enter'd mms. :)
The check alone for MMF_VM_MERGE_ANY is not sufficient. We also
need to check if the respective VMA is mergeable. I'll split off the
checks in ksm_madvise into its own function, so it can be called from
where VM_MERGEABLE is currently checked.
With the above change, the function unmerge_vmas is no longer needed.
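Roughly, the scan loop would then become (sketch; helper name
tentative):

    for_each_vma(vmi, vma) {
            if (!(vma->vm_flags & VM_MERGEABLE)) {
                    if (!test_bit(MMF_VM_MERGE_ANY, &mm->flags))
                            continue;
                    if (!vma_ksm_compatible(vma))   /* skip special VMAs */
                            continue;
                    /* no per-VMA flag write needed under the read lock */
            }
            /* ... existing scan body ... */
    }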
^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread [newest: ~2023-03-09 22:38 UTC]
Thread overview: 17+ messages
2023-02-24 4:39 [PATCH v3 0/3] mm: process/cgroup ksm support Stefan Roesch
2023-02-24 4:39 ` [PATCH v3 1/3] mm: add new api to enable ksm per process Stefan Roesch
2023-03-08 16:47 ` Johannes Weiner
2023-03-08 22:16 ` Stefan Roesch
2023-03-09 4:59 ` Johannes Weiner
2023-03-09 22:33 ` Stefan Roesch
2023-02-24 4:39 ` [PATCH v3 2/3] mm: add new KSM process and sysfs knobs Stefan Roesch
2023-02-24 4:40 ` [PATCH v3 3/3] selftests/mm: add new selftests for KSM Stefan Roesch
2023-02-26 5:30 ` Andrew Morton
2023-02-27 17:19 ` Stefan Roesch
2023-02-27 17:24 ` Mathieu Desnoyers
2023-02-26 5:08 ` [PATCH v3 0/3] mm: process/cgroup ksm support Andrew Morton
2023-02-27 17:13 ` Stefan Roesch
2023-03-07 18:48 ` Stefan Roesch
2023-03-08 17:01 ` David Hildenbrand
2023-03-08 17:30 ` Johannes Weiner
2023-03-08 18:41 ` David Hildenbrand