* [PATCH v22 0/4] implement getrandom() in vDSO
@ 2024-07-09 13:05 Jason A. Donenfeld
2024-07-09 13:05 ` [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings Jason A. Donenfeld
` (3 more replies)
0 siblings, 4 replies; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-09 13:05 UTC
To: linux-kernel, patches, tglx
Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86, Linus Torvalds,
Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell,
Florian Weimer, Arnd Bergmann, Jann Horn, Christian Brauner,
David Hildenbrand
The plan for this series is to take it through my random.git tree for 6.11.
It's cooking in linux-next now.
Changes v21->v22:
- Only add MAP_DROPPABLE, not the other MAP_*s, but make it imply the other
relevant flags.
- Ensure that mlock() and madvise() can't undo MAP_DROPPABLE implications.
- Since MAP_DROPPABLE is generally useful, remove conditional Kconfig
scaffolding around it.
- Follow mm/ standards on comment style.
- Base atop latest selftest PR, to avoid merge conflicts in 6.11.
- Update glibc patches.
Changes v20->v21:
- After extensive conversation with Linus, we're nixing the entire
vgetrandom_alloc() syscall, in favor of just exposing the functionality
needed through mmap() and having the kernel communicate to the caller what
arguments/sizes it should pass to mmap(). This simplifies the series
considerably. It also means that the first commit adds some new MAP_*
constants for mmap().
- Separate vDSO selftests out into separate commit.
--------------
Useful links:
- This series:
- https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git/log/
- Glibc patches by Adhemerval and me against glibc-2.39:
- https://git.zx2c4.com/glibc/log/?h=vdso
- In case you're actually interested in the v≤14 design where faults were
non-fatal and instructions were skipped (which I think is more coherent, even
if the implementation is controversial), this lives in my branch here:
- https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git/log/?h=jd/vdso-skip-insn
Note that I'm *not* actually proposing this for upstream at this time. But it
may be of conversational interest.
-------------
Two statements:
1) Userspace wants faster cryptographically secure random numbers of
arbitrary size, big or small.
2) Userspace is currently unable to safely roll its own RNG with the
same security profile as getrandom().
Statement (1) has been debated for years, with arguments ranging from
"we need faster cryptographically secure card shuffling!" to "the only
things that actually need good randomness are keys, which are few and
far between" to "actually, TLS CBC nonces are frequent" and so on. I
don't intend to wade into that debate substantially, except to note that
recently glibc added arc4random(), whose goal is to return a
cryptographically secure uint32_t, and there are real user reports of it
being too slow. So here we are.
Statement (2) is more interesting. The kernel is the nexus of all
entropic inputs that influence the RNG. It is in the best position, and
probably the only position, to decide anything at all about the current
state of the RNG and of its entropy. One of the things it uniquely knows
about is when reseeding is necessary.
For example, when a virtual machine is forked, restored, or duplicated,
it's imperative that the RNG doesn't generate the same outputs. For this
reason, there's a small protocol between hypervisors and the kernel that
indicates this has happened, alongside some ID, which the RNG uses to
immediately reseed, so as not to return the same numbers. Were userspace
to expand a getrandom() seed from time T1 for the next hour, and the
virtual machine forked at some point T2 within that hour, userspace would
continue to provide the same numbers to two (or more) different virtual
machines, resulting in potential cryptographic catastrophe. Something similar
happens on resuming from hibernation (or even suspend), with various
compromise scenarios there in mind.
There's a more general reason why userspace rolling its own RNG from a
getrandom() seed is fraught. There's a lot of attention paid to this
particular Linuxism we have of the RNG being initialized and thus
non-blocking or uninitialized and thus blocking until it is initialized.
These are our Two Big States that many hold to be the holy
differentiating factor between safe and not safe, between
cryptographically secure and garbage. The fact is, however, that the
distinction between these two states is a hand-wavy wishy-washy inexact
approximation. Outside of a few exceptional cases (e.g. a HW RNG is
available), we actually don't really ever know with any rigor at all
when the RNG is safe and ready (nor when it's compromised). We do the
best we can to "estimate" it, but entropy estimation is fundamentally
impossible in the general case. So really, we're just doing guess work,
and hoping it's good and conservative enough. Let's then assume that
there's always some potential error involved in this differentiator.
In fact, under the surface, the RNG is engineered around a different
principle, and that is trying to *use* new entropic inputs regularly and
at the right specific moments in time. For example, close to boot time,
the RNG reseeds itself more often than later. At certain events, like VM
fork, the RNG reseeds itself immediately. The various heuristics for
when the RNG will use new entropy, and how often, are really a core
aspect of what the RNG has some potential to do decently enough (and
something that will probably continue to improve in the future from
random.c's present set of algorithms). So in your mind, put away the mental
attachment to the Two Big States, which represent an approximation with
a potential margin of error. Instead keep in mind that the RNG's primary
operating heuristic is how often and exactly when it's going to reseed.
So, if userspace takes a seed from getrandom() at point T1, and uses it
for the next hour (or N megabytes or some other meaningless metric),
during that time, potential errors in the Two Big States approximation
are amplified. During that time potential reseeds are being lost,
forgotten, not reflected in the output stream. That's not good.
The simplest statement you could make is that userspace RNGs that expand
a getrandom() seed at some point T1 are nearly always *worse*, in some
way, than just calling getrandom() every time a random number is
desired.
For those reasons, after some discussion on libc-alpha, glibc's
arc4random() now just calls getrandom() on each invocation. That's
trivially safe, and gives us latitude to then make the safe thing faster
without becoming unsafe at our leisure. Card shuffling isn't
particularly fast, however.
How do we rectify this? By putting a safe implementation of getrandom()
in the vDSO, which has access to whatever information a
particular iteration of random.c is using to make its decisions. I use
that careful language of "particular iteration of random.c", because the
set of things that a vDSO getrandom() implementation might need for making
decisions as good as the kernel's will likely change over time. This
isn't just a matter of exporting certain *data* to userspace. We're not
going to commit to a "data API" where the various heuristics used are
exposed, locking in how the kernel works for decades to come, and then
leave it to various userspaces to roll something on top and shoot
themselves in the foot and have all sorts of complexity disasters.
Rather, vDSO getrandom() is supposed to be the *same exact algorithm*
that runs in the kernel, except it's been hoisted into userspace as
much as possible. And so vDSO getrandom() and kernel getrandom() will
always mirror each other hermetically.
API-wise, the vDSO gains this function:
ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags,
void *opaque_state, size_t opaque_len);
The return value and the first 3 arguments are the same as ordinary
getrandom(), while the penultimate argument is a pointer to some state
allocated with the right flags passed to mmap(2), explained below. Were all
five arguments passed to the getrandom syscall, nothing different would happen,
and the functions would have the exact same behavior.
If vgetrandom(NULL, 0, 0, &params, ~0UL) is called, then params gets populated
with information about what flags and prot fields to pass to mmap(2), as well
as how big each state should be, so that the caller can slice up returned
memory from mmap(2) into chunks for passing to vgetrandom().
Libc is expected to allocate a chunk of these on first use, and then
dole them out to threads as they're created, allocating more when
needed.
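To make that handshake concrete, here is a minimal sketch (not the
reference implementation; the selftest in the last commit of this series
is more complete) of allocating a single state, assuming vgetrandom() has
already been resolved from the vDSO and that the updated <linux/random.h>
declaring struct vgetrandom_opaque_params is in the include path:

  #include <sys/types.h>
  #include <sys/mman.h>
  #include <linux/random.h>

  typedef ssize_t (*vgetrandom_fn)(void *, size_t, unsigned int, void *, size_t);

  static void *alloc_one_state(vgetrandom_fn vgetrandom, size_t *state_size)
  {
      struct vgetrandom_opaque_params params;
      void *state;

      /* Ask the vDSO how each opaque state must be allocated. */
      if (vgetrandom(NULL, 0, 0, &params, ~0UL) != 0)
          return NULL;

      state = mmap(NULL, params.size_of_opaque_state, params.mmap_prot,
                   params.mmap_flags, -1, 0);
      if (state == MAP_FAILED)
          return NULL;

      *state_size = params.size_of_opaque_state;
      return state;
  }

  /* Then, per call: vgetrandom(buf, len, 0, state, *state_size); */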
The interesting meat of the implementation is in lib/vdso/getrandom.c,
as generic C code, and it aims to mainly follow random.c's buffered fast
key erasure logic. Before the RNG is initialized, it falls back to the
syscall. Right now it uses a simple generation counter to make its decisions
on reseeding (though this could be made more extensive over time).
The actual place that has the most work to do is in all of the other
files. Most of the vDSO shared page infrastructure is centered around
gettimeofday, and so the main structs are all in arrays for different
timestamp types, and attached to time namespaces, and so forth. I've
done the best I could to add onto this in an unintrusive way.
In my test results, performance is pretty stellar (around 15x for uint32_t
generation), and it seems to be working. There's an extended example in the
last commit of this series, showing how the syscall and the vDSO function
are meant to be used together.
Cc: linux-crypto@vger.kernel.org
Cc: linux-api@vger.kernel.org
Cc: x86@kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: Carlos O'Donell <carlos@redhat.com>
Cc: Florian Weimer <fweimer@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Jann Horn <jannh@google.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: David Hildenbrand <dhildenb@redhat.com>
Jason A. Donenfeld (4):
mm: add MAP_DROPPABLE for designating always lazily freeable mappings
random: introduce generic vDSO getrandom() implementation
x86: vdso: Wire up getrandom() vDSO implementation
selftests/vDSO: add tests for vgetrandom
MAINTAINERS | 4 +
arch/x86/Kconfig | 1 +
arch/x86/entry/vdso/Makefile | 3 +-
arch/x86/entry/vdso/vdso.lds.S | 2 +
arch/x86/entry/vdso/vgetrandom-chacha.S | 178 +++++++++++
arch/x86/entry/vdso/vgetrandom.c | 17 ++
arch/x86/include/asm/vdso/getrandom.h | 55 ++++
arch/x86/include/asm/vdso/vsyscall.h | 2 +
arch/x86/include/asm/vvar.h | 16 +
drivers/char/random.c | 18 +-
fs/proc/task_mmu.c | 1 +
include/linux/mm.h | 7 +
include/trace/events/mmflags.h | 7 +
include/uapi/linux/mman.h | 1 +
include/uapi/linux/random.h | 15 +
include/vdso/datapage.h | 11 +
include/vdso/getrandom.h | 46 +++
lib/vdso/Kconfig | 5 +
lib/vdso/getrandom.c | 251 +++++++++++++++
mm/madvise.c | 5 +-
mm/mlock.c | 2 +-
mm/mmap.c | 30 ++
mm/rmap.c | 22 +-
tools/include/asm/rwonce.h | 0
tools/include/uapi/linux/mman.h | 1 +
tools/testing/selftests/mm/.gitignore | 1 +
tools/testing/selftests/mm/Makefile | 1 +
tools/testing/selftests/mm/droppable.c | 53 ++++
tools/testing/selftests/vDSO/.gitignore | 2 +
tools/testing/selftests/vDSO/Makefile | 18 ++
.../testing/selftests/vDSO/vdso_test_chacha.c | 43 +++
.../selftests/vDSO/vdso_test_getrandom.c | 288 ++++++++++++++++++
32 files changed, 1099 insertions(+), 7 deletions(-)
create mode 100644 arch/x86/entry/vdso/vgetrandom-chacha.S
create mode 100644 arch/x86/entry/vdso/vgetrandom.c
create mode 100644 arch/x86/include/asm/vdso/getrandom.h
create mode 100644 include/vdso/getrandom.h
create mode 100644 lib/vdso/getrandom.c
create mode 100644 tools/include/asm/rwonce.h
create mode 100644 tools/testing/selftests/mm/droppable.c
create mode 100644 tools/testing/selftests/vDSO/vdso_test_chacha.c
create mode 100644 tools/testing/selftests/vDSO/vdso_test_getrandom.c
base-commit: 22a40d14b572deb80c0648557f4bd502d7e83826
prerequisite-patch-id: 9a45c4b77033012b2c2cbbec24fd8b2a7a5daf84
prerequisite-patch-id: 8b773921433de1e8b9fd5a8f3d6107258c133c2a
prerequisite-patch-id: afd1b07bd24fe3c93d1fef782ba9064e95d1534c
prerequisite-patch-id: a5cbcafe6072a173a8f20eac5cc7e545be50ae20
prerequisite-patch-id: 59640753e9c60e5d23ede9a20ed5c933a47b3f97
--
2.45.2
* [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-09 13:05 [PATCH v22 0/4] implement getrandom() in vDSO Jason A. Donenfeld
@ 2024-07-09 13:05 ` Jason A. Donenfeld
2024-07-10 3:27 ` David Hildenbrand
2024-07-09 13:05 ` [PATCH v22 2/4] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
` (2 subsequent siblings)
3 siblings, 1 reply; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-09 13:05 UTC
To: linux-kernel, patches, tglx
Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86, Linus Torvalds,
Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell,
Florian Weimer, Arnd Bergmann, Jann Horn, Christian Brauner,
David Hildenbrand, linux-mm
The vDSO getrandom() implementation works with a caller-allocated buffer
that has certain requirements:
- It shouldn't be written to core dumps.
* Easy: VM_DONTDUMP.
- It should be zeroed on fork.
* Easy: VM_WIPEONFORK.
- It shouldn't be written to swap.
* Uh-oh: mlock is rlimited.
* Uh-oh: mlock isn't inherited by forks.
It turns out that the vDSO getrandom() function has three really nice
characteristics that we can exploit to solve this problem:
1) Due to being wiped during fork(), the vDSO code is already robust to
having the contents of the pages it reads zeroed out midway through
the function's execution.
2) In the absolute worst case of whatever contingency we're coding for,
we have the option to fall back to the getrandom() syscall, and
everything is fine.
3) The buffers the function uses are only ever useful for a maximum of
60 seconds -- a sort of cache, rather than a long term allocation.
These characteristics mean that we can introduce VM_DROPPABLE, which
has the following semantics:
a) It never is written out to swap.
b) Under memory pressure, mm can just drop the pages (so that they're
zero when read back again).
c) It is inherited by fork.
d) It doesn't count against the mlock budget, since nothing is locked.
This is fairly simple to implement, with the one snag that we have to
use 64-bit VM_* flags, but this shouldn't be a problem, since the only
consumers will probably be 64-bit anyway.
This way, allocations used by vDSO getrandom() can use:
VM_DROPPABLE | VM_DONTDUMP | VM_WIPEONFORK | VM_NORESERVE
And there will be no problem with memory being tied up when not in use,
contents surviving fork(), appearing in coredumps, or being written out
to swap.
In order to let vDSO getrandom() use this, expose these via mmap(2) as
MAP_DROPPABLE.
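As a rough usage sketch (assuming the uapi header update from this patch
is installed), such an allocation boils down to:

  #include <stddef.h>
  #include <sys/mman.h>
  #include <linux/mman.h> /* MAP_DROPPABLE, added by this patch */

  /*
   * Map memory whose contents the kernel may zero at any time under
   * memory pressure; it is also wiped on fork(), excluded from
   * coredumps, never written to swap, and not counted against the
   * mlock budget.
   */
  static void *map_droppable(size_t size)
  {
      void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_DROPPABLE | MAP_ANONYMOUS, -1, 0);
      return p == MAP_FAILED ? NULL : p;
  }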
Finally, the provided self test ensures that this is working as desired.
Cc: linux-mm@kvack.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
fs/proc/task_mmu.c | 1 +
include/linux/mm.h | 7 ++++
include/trace/events/mmflags.h | 7 ++++
include/uapi/linux/mman.h | 1 +
mm/madvise.c | 5 ++-
mm/mlock.c | 2 +-
mm/mmap.c | 30 +++++++++++++++
mm/rmap.c | 22 +++++++++--
tools/include/uapi/linux/mman.h | 1 +
tools/testing/selftests/mm/.gitignore | 1 +
tools/testing/selftests/mm/Makefile | 1 +
tools/testing/selftests/mm/droppable.c | 53 ++++++++++++++++++++++++++
12 files changed, 126 insertions(+), 5 deletions(-)
create mode 100644 tools/testing/selftests/mm/droppable.c
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 71e5039d940d..46f0b0fe9ee3 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -708,6 +708,7 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
[ilog2(VM_SHADOW_STACK)] = "ss",
#endif
#ifdef CONFIG_64BIT
+ [ilog2(VM_DROPPABLE)] = "dp",
[ilog2(VM_SEALED)] = "sl",
#endif
};
diff --git a/include/linux/mm.h b/include/linux/mm.h
index eb7c96d24ac0..e078c2890bf8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -406,6 +406,13 @@ extern unsigned int kobjsize(const void *objp);
#define VM_ALLOW_ANY_UNCACHED VM_NONE
#endif
+#ifdef CONFIG_64BIT
+#define VM_DROPPABLE_BIT 40
+#define VM_DROPPABLE BIT(VM_DROPPABLE_BIT)
+#else
+#define VM_DROPPABLE VM_NONE
+#endif
+
#ifdef CONFIG_64BIT
/* VM is sealed, in vm_flags */
#define VM_SEALED _BITUL(63)
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index e46d6e82765e..b63d211bd141 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -165,6 +165,12 @@ IF_HAVE_PG_ARCH_X(arch_3)
# define IF_HAVE_UFFD_MINOR(flag, name)
#endif
+#ifdef CONFIG_64BIT
+# define IF_HAVE_VM_DROPPABLE(flag, name) {flag, name},
+#else
+# define IF_HAVE_VM_DROPPABLE(flag, name)
+#endif
+
#define __def_vmaflag_names \
{VM_READ, "read" }, \
{VM_WRITE, "write" }, \
@@ -197,6 +203,7 @@ IF_HAVE_VM_SOFTDIRTY(VM_SOFTDIRTY, "softdirty" ) \
{VM_MIXEDMAP, "mixedmap" }, \
{VM_HUGEPAGE, "hugepage" }, \
{VM_NOHUGEPAGE, "nohugepage" }, \
+IF_HAVE_VM_DROPPABLE(VM_DROPPABLE, "droppable" ) \
{VM_MERGEABLE, "mergeable" } \
#define show_vma_flags(flags) \
diff --git a/include/uapi/linux/mman.h b/include/uapi/linux/mman.h
index a246e11988d5..e89d00528f2f 100644
--- a/include/uapi/linux/mman.h
+++ b/include/uapi/linux/mman.h
@@ -17,6 +17,7 @@
#define MAP_SHARED 0x01 /* Share changes */
#define MAP_PRIVATE 0x02 /* Changes are private */
#define MAP_SHARED_VALIDATE 0x03 /* share + validate extension flags */
+#define MAP_DROPPABLE 0x08 /* Zero memory under memory pressure. */
/*
* Huge page size encoding when MAP_HUGETLB is specified, and a huge page
diff --git a/mm/madvise.c b/mm/madvise.c
index a77893462b92..cba5bc652fc4 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1068,13 +1068,16 @@ static int madvise_vma_behavior(struct vm_area_struct *vma,
new_flags |= VM_WIPEONFORK;
break;
case MADV_KEEPONFORK:
+ if (vma->vm_flags & VM_DROPPABLE)
+ return -EINVAL;
new_flags &= ~VM_WIPEONFORK;
break;
case MADV_DONTDUMP:
new_flags |= VM_DONTDUMP;
break;
case MADV_DODUMP:
- if (!is_vm_hugetlb_page(vma) && new_flags & VM_SPECIAL)
+ if ((!is_vm_hugetlb_page(vma) && new_flags & VM_SPECIAL) ||
+ (vma->vm_flags & VM_DROPPABLE))
return -EINVAL;
new_flags &= ~VM_DONTDUMP;
break;
diff --git a/mm/mlock.c b/mm/mlock.c
index 30b51cdea89d..b87b3d8cc9cc 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -485,7 +485,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
- vma_is_dax(vma) || vma_is_secretmem(vma))
+ vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
goto out;
diff --git a/mm/mmap.c b/mm/mmap.c
index 83b4682ec85c..8aeedeb784c2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1369,6 +1369,36 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
pgoff = 0;
vm_flags |= VM_SHARED | VM_MAYSHARE;
break;
+ case MAP_DROPPABLE:
+ if (VM_DROPPABLE == VM_NONE)
+ return -ENOTSUPP;
+ /*
+ * A locked or stack area makes no sense to be droppable.
+ *
+ * Also, since droppable pages can just go away at any time
+ * it makes no sense to copy them on fork or dump them.
+ *
+ * And don't attempt to combine with hugetlb for now.
+ */
+ if (flags & (MAP_LOCKED | MAP_HUGETLB))
+ return -EINVAL;
+ if (vm_flags & (VM_GROWSDOWN | VM_GROWSUP))
+ return -EINVAL;
+
+ vm_flags |= VM_DROPPABLE;
+
+ /*
+ * If the pages can be dropped, then it doesn't make
+ * sense to reserve them.
+ */
+ vm_flags |= VM_NORESERVE;
+
+ /*
+ * Likewise, they're volatile enough that they
+ * shouldn't survive forks or coredumps.
+ */
+ vm_flags |= VM_WIPEONFORK | VM_DONTDUMP;
+ fallthrough;
case MAP_PRIVATE:
/*
* Set pgoff according to addr for anon_vma.
diff --git a/mm/rmap.c b/mm/rmap.c
index e8fc5ecb59b2..1f9b5a9cb121 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1397,7 +1397,12 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
VM_BUG_ON_VMA(address < vma->vm_start ||
address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
- __folio_set_swapbacked(folio);
+ /*
+ * VM_DROPPABLE mappings don't swap; instead they're just dropped when
+ * under memory pressure.
+ */
+ if (!(vma->vm_flags & VM_DROPPABLE))
+ __folio_set_swapbacked(folio);
__folio_set_anon(folio, vma, address, true);
if (likely(!folio_test_large(folio))) {
@@ -1841,7 +1846,13 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* plus the rmap(s) (dropped by discard:).
*/
if (ref_count == 1 + map_count &&
- !folio_test_dirty(folio)) {
+ (!folio_test_dirty(folio) ||
+ /*
+ * Unlike MADV_FREE mappings, VM_DROPPABLE
+ * ones can be dropped even if they've
+ * been dirtied.
+ */
+ (vma->vm_flags & VM_DROPPABLE))) {
dec_mm_counter(mm, MM_ANONPAGES);
goto discard;
}
@@ -1851,7 +1862,12 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* discarded. Remap the page to page table.
*/
set_pte_at(mm, address, pvmw.pte, pteval);
- folio_set_swapbacked(folio);
+ /*
+ * Unlike MADV_FREE mappings, VM_DROPPABLE ones
+ * never get swap backed on failure to drop.
+ */
+ if (!(vma->vm_flags & VM_DROPPABLE))
+ folio_set_swapbacked(folio);
ret = false;
page_vma_mapped_walk_done(&pvmw);
break;
diff --git a/tools/include/uapi/linux/mman.h b/tools/include/uapi/linux/mman.h
index a246e11988d5..e89d00528f2f 100644
--- a/tools/include/uapi/linux/mman.h
+++ b/tools/include/uapi/linux/mman.h
@@ -17,6 +17,7 @@
#define MAP_SHARED 0x01 /* Share changes */
#define MAP_PRIVATE 0x02 /* Changes are private */
#define MAP_SHARED_VALIDATE 0x03 /* share + validate extension flags */
+#define MAP_DROPPABLE 0x08 /* Zero memory under memory pressure. */
/*
* Huge page size encoding when MAP_HUGETLB is specified, and a huge page
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
index 0b9ab987601c..a8beeb43c2b5 100644
--- a/tools/testing/selftests/mm/.gitignore
+++ b/tools/testing/selftests/mm/.gitignore
@@ -49,3 +49,4 @@ hugetlb_fault_after_madv
hugetlb_madv_vs_map
mseal_test
seal_elf
+droppable
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index 3b49bc3d0a3b..e3e5740e13e1 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -73,6 +73,7 @@ TEST_GEN_FILES += ksm_functional_tests
TEST_GEN_FILES += mdwe_test
TEST_GEN_FILES += hugetlb_fault_after_madv
TEST_GEN_FILES += hugetlb_madv_vs_map
+TEST_GEN_FILES += droppable
ifneq ($(ARCH),arm64)
TEST_GEN_FILES += soft-dirty
diff --git a/tools/testing/selftests/mm/droppable.c b/tools/testing/selftests/mm/droppable.c
new file mode 100644
index 000000000000..f3d9ecf96890
--- /dev/null
+++ b/tools/testing/selftests/mm/droppable.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <assert.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <signal.h>
+#include <sys/mman.h>
+#include <linux/mman.h>
+
+#include "../kselftest.h"
+
+int main(int argc, char *argv[])
+{
+ size_t alloc_size = 134217728;
+ size_t page_size = getpagesize();
+ void *alloc;
+ pid_t child;
+
+ ksft_print_header();
+ ksft_set_plan(1);
+
+ alloc = mmap(0, alloc_size, PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_DROPPABLE, -1, 0);
+ assert(alloc != MAP_FAILED);
+ memset(alloc, 'A', alloc_size);
+ for (size_t i = 0; i < alloc_size; i += page_size)
+ assert(*(uint8_t *)(alloc + i));
+
+ child = fork();
+ assert(child >= 0);
+ if (!child) {
+ for (;;)
+ *(char *)malloc(page_size) = 'B';
+ }
+
+ for (bool done = false; !done;) {
+ for (size_t i = 0; i < alloc_size; i += page_size) {
+ if (!*(uint8_t *)(alloc + i)) {
+ done = true;
+ break;
+ }
+ }
+ }
+ kill(child, SIGTERM);
+
+ ksft_test_result_pass("MAP_DROPPABLE: PASS\n");
+ exit(KSFT_PASS);
+}
--
2.45.2
* [PATCH v22 2/4] random: introduce generic vDSO getrandom() implementation
2024-07-09 13:05 [PATCH v22 0/4] implement getrandom() in vDSO Jason A. Donenfeld
2024-07-09 13:05 ` [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings Jason A. Donenfeld
@ 2024-07-09 13:05 ` Jason A. Donenfeld
2024-07-09 13:05 ` [PATCH v22 3/4] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
2024-07-09 13:05 ` [PATCH v22 4/4] selftests/vDSO: add tests for vgetrandom Jason A. Donenfeld
3 siblings, 0 replies; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-09 13:05 UTC
To: linux-kernel, patches, tglx
Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86, Linus Torvalds,
Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell,
Florian Weimer, Arnd Bergmann, Jann Horn, Christian Brauner,
David Hildenbrand
Provide a generic C vDSO getrandom() implementation, which operates on
an opaque state allocated by the caller with mmap(2) and produces random
bytes the same way as getrandom(). This has the following API signature:
ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags,
void *opaque_state, size_t opaque_len);
The return value and the first three arguments are the same as ordinary
getrandom(), while the last two arguments are a pointer to the opaque
allocated state and its size. Were all five arguments passed to the
getrandom() syscall, nothing different would happen, and the functions
would have the exact same behavior.
The actual vDSO RNG algorithm implemented is the same one implemented by
drivers/char/random.c, using the same fast-erasure techniques as that.
Should the in-kernel implementation change, so too will the vDSO one.
It requires an implementation of ChaCha20 that does not use any stack,
in order to maintain forward secrecy if a multi-threaded program forks
(though this does not account for a similar issue with SA_SIGINFO
copying registers to the stack), so this is left as an
architecture-specific fill-in. Stack-less ChaCha20 is an easy algorithm
to implement on a variety of architectures, so this shouldn't be too
onerous.
Initially, the state is keyless, and so the first call makes a
getrandom() syscall to generate that key, and then uses it for
subsequent calls. By keeping track of a generation counter, it knows
when its key is invalidated and it should fetch a new one using the
syscall. Later, more than just a generation counter might be used.
Since the opaque state is mapped with wipe-on-fork semantics, the key and
related state are wiped during a fork(), so secrets don't roll over into new
processes, and the same state doesn't accidentally generate the same
random stream. The generation counter, as well, is always >0, so that
the 0 counter is a useful indication of a fork() or otherwise
uninitialized state.
If the kernel RNG is not yet initialized, then the vDSO always calls the
syscall, because that behavior cannot be emulated in userspace, but
fortunately that state is short lived and only during early boot. If it
has been initialized, then there is no need to inspect the `flags`
argument, because the behavior does not change post-initialization
regardless of the `flags` value.
Since the opaque state passed to it is mutated, vDSO getrandom() is not
reentrant when used with the same opaque state, which libc should be
mindful of.
The function works over an opaque per-thread state of a particular size,
which must be marked VM_WIPEONFORK, VM_DONTDUMP, VM_NORESERVE, and
VM_DROPPABLE for proper operation. Over time, the nuances of these
allocations may change or grow or even differ based on architectural
features.
The opaque state passed to vDSO getrandom() must be allocated using the
mmap_flags and mmap_prot parameters provided by the vgetrandom_opaque_params
struct, which also contains the size of each state. That struct can be
obtained with a call to vgetrandom(NULL, 0, 0, &params, ~0UL). Then,
libc can call mmap(2) and slice up the returned array into a state per
each thread, while ensuring that no single state straddles a page
boundary. Libc is expected to allocate a chunk of these on first use,
and then dole them out to threads as they're created, allocating more
when needed.
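A rough sketch of that slicing, assuming params was already filled in by
the vgetrandom(NULL, 0, 0, &params, ~0UL) call described above (the
vdso_test_getrandom selftest later in this series does this more
completely):

  #include <unistd.h>
  #include <sys/mman.h>
  #include <linux/random.h>

  /* Map enough pages for num states, placing only whole states in each page. */
  static void *alloc_states(const struct vgetrandom_opaque_params *params,
                            size_t num, size_t *states_per_page)
  {
      /* Each state is far smaller than a page, so per_page is nonzero. */
      size_t page_size = sysconf(_SC_PAGESIZE);
      size_t per_page = page_size / params->size_of_opaque_state;
      size_t pages = (num + per_page - 1) / per_page;
      void *block = mmap(NULL, pages * page_size, params->mmap_prot,
                         params->mmap_flags, -1, 0);

      if (block == MAP_FAILED)
          return NULL;
      *states_per_page = per_page;
      /*
       * State i lives at block + (i / per_page) * page_size +
       * (i % per_page) * params->size_of_opaque_state, so no state
       * straddles a page boundary.
       */
      return block;
  }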
vDSO getrandom() provides the ability for userspace to generate random
bytes quickly and safely, and is intended to be integrated into libc's
thread management. As an illustrative example, the introduced code in
the vdso_test_getrandom self test later in this series might be used to
do the same outside of libc. In a libc the various pthread-isms are
expected to be elided into libc internals.
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
MAINTAINERS | 2 +
drivers/char/random.c | 18 ++-
include/uapi/linux/random.h | 15 +++
include/vdso/datapage.h | 11 ++
include/vdso/getrandom.h | 46 +++++++
lib/vdso/Kconfig | 5 +
lib/vdso/getrandom.c | 251 ++++++++++++++++++++++++++++++++++++
7 files changed, 347 insertions(+), 1 deletion(-)
create mode 100644 include/vdso/getrandom.h
create mode 100644 lib/vdso/getrandom.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 3c4fdf74a3f9..798158329ad8 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -18747,6 +18747,8 @@ T: git https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git
F: Documentation/devicetree/bindings/rng/microsoft,vmgenid.yaml
F: drivers/char/random.c
F: drivers/virt/vmgenid.c
+F: include/vdso/getrandom.h
+F: lib/vdso/getrandom.c
RAPIDIO SUBSYSTEM
M: Matt Porter <mporter@kernel.crashing.org>
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2597cb43f438..b02a12436750 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
/*
- * Copyright (C) 2017-2022 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ * Copyright (C) 2017-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
* Copyright Matt Mackall <mpm@selenic.com>, 2003, 2004, 2005
* Copyright Theodore Ts'o, 1994, 1995, 1996, 1997, 1998, 1999. All rights reserved.
*
@@ -56,6 +56,10 @@
#include <linux/sched/isolation.h>
#include <crypto/chacha.h>
#include <crypto/blake2s.h>
+#ifdef CONFIG_VDSO_GETRANDOM
+#include <vdso/getrandom.h>
+#include <vdso/datapage.h>
+#endif
#include <asm/archrandom.h>
#include <asm/processor.h>
#include <asm/irq.h>
@@ -271,6 +275,15 @@ static void crng_reseed(struct work_struct *work)
if (next_gen == ULONG_MAX)
++next_gen;
WRITE_ONCE(base_crng.generation, next_gen);
+#ifdef CONFIG_VDSO_GETRANDOM
+ /* base_crng.generation's invalid value is ULONG_MAX, while
+ * _vdso_rng_data.generation's invalid value is 0, so add one to the
+ * former to arrive at the latter. Use smp_store_release so that this
+ * is ordered with the write above to base_crng.generation. Pairs with
+ * the smp_rmb() before the syscall in the vDSO code.
+ */
+ smp_store_release(&_vdso_rng_data.generation, next_gen + 1);
+#endif
if (!static_branch_likely(&crng_is_ready))
crng_init = CRNG_READY;
spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -721,6 +734,9 @@ static void __cold _credit_init_bits(size_t bits)
if (static_key_initialized && system_unbound_wq)
queue_work(system_unbound_wq, &set_ready);
atomic_notifier_call_chain(&random_ready_notifier, 0, NULL);
+#ifdef CONFIG_VDSO_GETRANDOM
+ WRITE_ONCE(_vdso_rng_data.is_ready, true);
+#endif
wake_up_interruptible(&crng_init_wait);
kill_fasync(&fasync, SIGIO, POLL_IN);
pr_notice("crng init done\n");
diff --git a/include/uapi/linux/random.h b/include/uapi/linux/random.h
index e744c23582eb..2a3fe4c2cdc9 100644
--- a/include/uapi/linux/random.h
+++ b/include/uapi/linux/random.h
@@ -55,4 +55,19 @@ struct rand_pool_info {
#define GRND_RANDOM 0x0002
#define GRND_INSECURE 0x0004
+/**
+ * struct vgetrandom_opaque_params - arguments for allocating memory for vgetrandom
+ *
* @size_of_opaque_state: Size of each state that is to be passed to vgetrandom().
+ * @mmap_prot: Value of the prot argument in mmap(2).
+ * @mmap_flags: Value of the flags argument in mmap(2).
+ * @reserved: Reserved for future use.
+ */
+struct vgetrandom_opaque_params {
+ __u32 size_of_opaque_state;
+ __u32 mmap_prot;
+ __u32 mmap_flags;
+ __u32 reserved[13];
+};
+
#endif /* _UAPI_LINUX_RANDOM_H */
diff --git a/include/vdso/datapage.h b/include/vdso/datapage.h
index d04d394db064..05e5787beb73 100644
--- a/include/vdso/datapage.h
+++ b/include/vdso/datapage.h
@@ -113,6 +113,16 @@ struct vdso_data {
struct arch_vdso_data arch_data;
};
+/**
+ * struct vdso_rng_data - vdso RNG state information
+ * @generation: counter representing the number of RNG reseeds
+ * @is_ready: boolean signaling whether the RNG is initialized
+ */
+struct vdso_rng_data {
+ u64 generation;
+ u8 is_ready;
+};
+
/*
* We use the hidden visibility to prevent the compiler from generating a GOT
* relocation. Not only is going through a GOT useless (the entry couldn't and
@@ -124,6 +134,7 @@ struct vdso_data {
*/
extern struct vdso_data _vdso_data[CS_BASES] __attribute__((visibility("hidden")));
extern struct vdso_data _timens_data[CS_BASES] __attribute__((visibility("hidden")));
+extern struct vdso_rng_data _vdso_rng_data __attribute__((visibility("hidden")));
/**
* union vdso_data_store - Generic vDSO data page
diff --git a/include/vdso/getrandom.h b/include/vdso/getrandom.h
new file mode 100644
index 000000000000..a8b7c14b0ae0
--- /dev/null
+++ b/include/vdso/getrandom.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#ifndef _VDSO_GETRANDOM_H
+#define _VDSO_GETRANDOM_H
+
+#include <linux/types.h>
+
+#define CHACHA_KEY_SIZE 32
+#define CHACHA_BLOCK_SIZE 64
+
+/**
+ * struct vgetrandom_state - State used by vDSO getrandom().
+ *
+ * @batch: One and a half ChaCha20 blocks of buffered RNG output.
+ *
+ * @key: Key to be used for generating next batch.
+ *
+ * @batch_key: Union of the prior two members, which is exactly two full
+ * ChaCha20 blocks in size, so that @batch and @key can be filled
+ * together.
+ *
+ * @generation: Snapshot of @rng_info->generation in the vDSO data page at
+ * the time @key was generated.
+ *
+ * @pos: Offset into @batch of the next available random byte.
+ *
+ * @in_use: Reentrancy guard for reusing a state within the same thread
+ * due to signal handlers.
+ */
+struct vgetrandom_state {
+ union {
+ struct {
+ u8 batch[CHACHA_BLOCK_SIZE * 3 / 2];
+ u32 key[CHACHA_KEY_SIZE / sizeof(u32)];
+ };
+ u8 batch_key[CHACHA_BLOCK_SIZE * 2];
+ };
+ u64 generation;
+ u8 pos;
+ bool in_use;
+};
+
+#endif /* _VDSO_GETRANDOM_H */
diff --git a/lib/vdso/Kconfig b/lib/vdso/Kconfig
index c46c2300517c..82fe827af542 100644
--- a/lib/vdso/Kconfig
+++ b/lib/vdso/Kconfig
@@ -38,3 +38,8 @@ config GENERIC_VDSO_OVERFLOW_PROTECT
in the hotpath.
endif
+
+config VDSO_GETRANDOM
+ bool
+ help
+ Selected by architectures that support vDSO getrandom().
diff --git a/lib/vdso/getrandom.c b/lib/vdso/getrandom.c
new file mode 100644
index 000000000000..b230f0b10832
--- /dev/null
+++ b/lib/vdso/getrandom.c
@@ -0,0 +1,251 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <linux/cache.h>
+#include <linux/kernel.h>
+#include <linux/time64.h>
+#include <vdso/datapage.h>
+#include <vdso/getrandom.h>
+#include <asm/vdso/getrandom.h>
+#include <asm/vdso/vsyscall.h>
+#include <asm/unaligned.h>
+#include <uapi/linux/mman.h>
+
+#define MEMCPY_AND_ZERO_SRC(type, dst, src, len) do { \
+ while (len >= sizeof(type)) { \
+ __put_unaligned_t(type, __get_unaligned_t(type, src), dst); \
+ __put_unaligned_t(type, 0, src); \
+ dst += sizeof(type); \
+ src += sizeof(type); \
+ len -= sizeof(type); \
+ } \
+} while (0)
+
+static void memcpy_and_zero_src(void *dst, void *src, size_t len)
+{
+ if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
+ if (IS_ENABLED(CONFIG_64BIT))
+ MEMCPY_AND_ZERO_SRC(u64, dst, src, len);
+ MEMCPY_AND_ZERO_SRC(u32, dst, src, len);
+ MEMCPY_AND_ZERO_SRC(u16, dst, src, len);
+ }
+ MEMCPY_AND_ZERO_SRC(u8, dst, src, len);
+}
+
+/**
+ * __cvdso_getrandom_data - Generic vDSO implementation of getrandom() syscall.
+ * @rng_info: Describes state of kernel RNG, memory shared with kernel.
+ * @buffer: Destination buffer to fill with random bytes.
+ * @len: Size of @buffer in bytes.
+ * @flags: Zero or more GRND_* flags.
+ * @opaque_state: Pointer to an opaque state area.
+ * @opaque_len: Length of opaque state area.
+ *
+ * This implements a "fast key erasure" RNG using ChaCha20, in the same way that the kernel's
+ * getrandom() syscall does. It periodically reseeds its key from the kernel's RNG, at the same
+ * schedule that the kernel's RNG is reseeded. If the kernel's RNG is not ready, then this always
+ * calls into the syscall.
+ *
+ * If @buffer, @len, and @flags are 0, and @opaque_len is ~0UL, then @opaque_state is populated
+ * with a struct vgetrandom_opaque_params and the function returns 0; if it does not return 0,
+ * this function should not be used.
+ *
+ * @opaque_state *must* be allocated by calling mmap(2) using the mmap_prot and mmap_flags fields
+ * from the struct vgetrandom_opaque_params, and states must not straddle pages. Unless external
+ * locking is used, one state must be allocated per thread, as it is not safe to call this function
+ * concurrently with the same @opaque_state. However, it is safe to call this using the same
+ * @opaque_state that is shared between main code and signal handling code, within the same thread.
+ *
+ * Returns: The number of random bytes written to @buffer, or a negative value indicating an error.
+ */
+static __always_inline ssize_t
+__cvdso_getrandom_data(const struct vdso_rng_data *rng_info, void *buffer, size_t len,
+ unsigned int flags, void *opaque_state, size_t opaque_len)
+{
+ ssize_t ret = min_t(size_t, INT_MAX & PAGE_MASK /* = MAX_RW_COUNT */, len);
+ struct vgetrandom_state *state = opaque_state;
+ size_t batch_len, nblocks, orig_len = len;
+ bool in_use, have_retried = false;
+ unsigned long current_generation;
+ void *orig_buffer = buffer;
+ u32 counter[2] = { 0 };
+
+ if (unlikely(opaque_len == ~0UL && !buffer && !len && !flags)) {
+ *(struct vgetrandom_opaque_params *)opaque_state = (struct vgetrandom_opaque_params) {
+ .size_of_opaque_state = sizeof(*state),
+ .mmap_prot = PROT_READ | PROT_WRITE,
+ .mmap_flags = MAP_DROPPABLE | MAP_ANONYMOUS
+ };
+ return 0;
+ }
+
+ /* The state must not straddle a page, since pages can be zeroed at any time. */
+ if (unlikely(((unsigned long)opaque_state & ~PAGE_MASK) + sizeof(*state) > PAGE_SIZE))
+ return -EFAULT;
+
+ /* If the caller passes the wrong size, which might happen due to CRIU, fallback. */
+ if (unlikely(opaque_len != sizeof(*state)))
+ goto fallback_syscall;
+
+ /*
+ * If the kernel's RNG is not yet ready, then it's not possible to provide random bytes from
+ * userspace, because A) the various @flags require this to block, or not, depending on
+ * various factors unavailable to userspace, and B) the kernel's behavior before the RNG is
+ * ready is to reseed from the entropy pool at every invocation.
+ */
+ if (unlikely(!READ_ONCE(rng_info->is_ready)))
+ goto fallback_syscall;
+
+ /*
+ * This condition is checked after @rng_info->is_ready, because before the kernel's RNG is
+ * initialized, the @flags parameter may require this to block or return an error, even when
+ * len is zero.
+ */
+ if (unlikely(!len))
+ return 0;
+
+ /*
+ * @state->in_use is basic reentrancy protection against this running in a signal handler
+ * with the same @opaque_state, but obviously not atomic wrt multiple CPUs or more than one
+ * level of reentrancy. If a signal interrupts this after reading @state->in_use, but before
+ * writing @state->in_use, there is still no race, because the signal handler will run to
+ * its completion before returning execution.
+ */
+ in_use = READ_ONCE(state->in_use);
+ if (unlikely(in_use))
+ /* The syscall simply fills the buffer and does not touch @state, so fallback. */
+ goto fallback_syscall;
+ WRITE_ONCE(state->in_use, true);
+
+retry_generation:
+ /*
+ * @rng_info->generation must always be read here, as it serializes @state->key with the
+ * kernel's RNG reseeding schedule.
+ */
+ current_generation = READ_ONCE(rng_info->generation);
+
+ /*
+ * If @state->generation doesn't match the kernel RNG's generation, then it means the
+ * kernel's RNG has reseeded, and so @state->key is reseeded as well.
+ */
+ if (unlikely(state->generation != current_generation)) {
+ /*
+ * Write the generation before filling the key, in case of fork. If there is a fork
+ * just after this line, the parent and child will get different random bytes from
+ * the syscall, which is good. However, were this line to occur after the getrandom
+ * syscall, then both child and parent could have the same bytes and the same
+ * generation counter, so the fork would not be detected. Therefore, write
+ * @state->generation before the call to the getrandom syscall.
+ */
+ WRITE_ONCE(state->generation, current_generation);
+
+ /*
+ * Prevent the syscall from being reordered wrt current_generation. Pairs with the
+ * smp_store_release(&_vdso_rng_data.generation) in random.c.
+ */
+ smp_rmb();
+
+ /* Reseed @state->key using fresh bytes from the kernel. */
+ if (getrandom_syscall(state->key, sizeof(state->key), 0) != sizeof(state->key)) {
+ /*
+ * If the syscall failed to refresh the key, then @state->key is now
+ * invalid, so invalidate the generation so that it is not used again, and
+ * fallback to using the syscall entirely.
+ */
+ WRITE_ONCE(state->generation, 0);
+
+ /*
+ * Set @state->in_use to false only after the last write to @state in the
+ * line above.
+ */
+ WRITE_ONCE(state->in_use, false);
+
+ goto fallback_syscall;
+ }
+
+ /*
+ * Set @state->pos to beyond the end of the batch, so that the batch is refilled
+ * using the new key.
+ */
+ state->pos = sizeof(state->batch);
+ }
+
+ /* Set len to the total amount of bytes that this function is allowed to read, ret. */
+ len = ret;
+more_batch:
+ /*
+ * First use bytes out of @state->batch, which may have been filled by the last call to this
+ * function.
+ */
+ batch_len = min_t(size_t, sizeof(state->batch) - state->pos, len);
+ if (batch_len) {
+ /* Zeroing at the same time as memcpying helps preserve forward secrecy. */
+ memcpy_and_zero_src(buffer, state->batch + state->pos, batch_len);
+ state->pos += batch_len;
+ buffer += batch_len;
+ len -= batch_len;
+ }
+
+ if (!len) {
+ /* Prevent the loop from being reordered wrt ->generation. */
+ barrier();
+
+ /*
+ * Since @rng_info->generation will never be 0, re-read @state->generation, rather
+ * than using the local current_generation variable, to learn whether a fork
+ * occurred or if @state was zeroed due to memory pressure. Primarily, though, this
+ * indicates whether the kernel's RNG has reseeded, in which case generate a new key
+ * and start over.
+ */
+ if (unlikely(READ_ONCE(state->generation) != READ_ONCE(rng_info->generation))) {
+ /*
+ * Prevent this from looping forever in case of low memory or racing with a
+ * user force-reseeding the kernel's RNG using the ioctl.
+ */
+ if (have_retried) {
+ WRITE_ONCE(state->in_use, false);
+ goto fallback_syscall;
+ }
+
+ have_retried = true;
+ buffer = orig_buffer;
+ goto retry_generation;
+ }
+
+ /*
+ * Set @state->in_use to false only when there will be no more reads or writes of
+ * @state.
+ */
+ WRITE_ONCE(state->in_use, false);
+ return ret;
+ }
+
+ /* Generate blocks of RNG output directly into @buffer while there's enough room left. */
+ nblocks = len / CHACHA_BLOCK_SIZE;
+ if (nblocks) {
+ __arch_chacha20_blocks_nostack(buffer, state->key, counter, nblocks);
+ buffer += nblocks * CHACHA_BLOCK_SIZE;
+ len -= nblocks * CHACHA_BLOCK_SIZE;
+ }
+
+ BUILD_BUG_ON(sizeof(state->batch_key) % CHACHA_BLOCK_SIZE != 0);
+
+ /* Refill the batch and overwrite the key, in order to preserve forward secrecy. */
+ __arch_chacha20_blocks_nostack(state->batch_key, state->key, counter,
+ sizeof(state->batch_key) / CHACHA_BLOCK_SIZE);
+
+ /* Since the batch was just refilled, set the position back to 0 to indicate a full batch. */
+ state->pos = 0;
+ goto more_batch;
+
+fallback_syscall:
+ return getrandom_syscall(orig_buffer, orig_len, flags);
+}
+
+static __always_inline ssize_t
+__cvdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state, size_t opaque_len)
+{
+ return __cvdso_getrandom_data(__arch_get_vdso_rng_data(), buffer, len, flags, opaque_state, opaque_len);
+}
--
2.45.2
* [PATCH v22 3/4] x86: vdso: Wire up getrandom() vDSO implementation
2024-07-09 13:05 [PATCH v22 0/4] implement getrandom() in vDSO Jason A. Donenfeld
2024-07-09 13:05 ` [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings Jason A. Donenfeld
2024-07-09 13:05 ` [PATCH v22 2/4] random: introduce generic vDSO getrandom() implementation Jason A. Donenfeld
@ 2024-07-09 13:05 ` Jason A. Donenfeld
2024-07-09 13:05 ` [PATCH v22 4/4] selftests/vDSO: add tests for vgetrandom Jason A. Donenfeld
3 siblings, 0 replies; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-09 13:05 UTC
To: linux-kernel, patches, tglx
Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86, Linus Torvalds,
Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell,
Florian Weimer, Arnd Bergmann, Jann Horn, Christian Brauner,
David Hildenbrand, Samuel Neves
Hook up the generic vDSO implementation to the x86 vDSO data page. Since
the existing vDSO infrastructure is heavily based on the timekeeping
functionality, which works over arrays of bases, a new macro is
introduced for vvars that are not arrays.
The vDSO function requires a ChaCha20 implementation that does not write
to the stack, yet can still do an entire ChaCha20 permutation, so
provide this using SSE2, since this is userland code that must work on
all x86-64 processors.
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Samuel Neves <sneves@dei.uc.pt> # for vgetrandom-chacha.S
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
MAINTAINERS | 2 +
arch/x86/Kconfig | 1 +
arch/x86/entry/vdso/Makefile | 3 +-
arch/x86/entry/vdso/vdso.lds.S | 2 +
arch/x86/entry/vdso/vgetrandom-chacha.S | 178 ++++++++++++++++++++++++
arch/x86/entry/vdso/vgetrandom.c | 17 +++
arch/x86/include/asm/vdso/getrandom.h | 55 ++++++++
arch/x86/include/asm/vdso/vsyscall.h | 2 +
arch/x86/include/asm/vvar.h | 16 +++
9 files changed, 275 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/entry/vdso/vgetrandom-chacha.S
create mode 100644 arch/x86/entry/vdso/vgetrandom.c
create mode 100644 arch/x86/include/asm/vdso/getrandom.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 798158329ad8..00cf0362482b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -18749,6 +18749,8 @@ F: drivers/char/random.c
F: drivers/virt/vmgenid.c
F: include/vdso/getrandom.h
F: lib/vdso/getrandom.c
+F: arch/x86/entry/vdso/vgetrandom*
+F: arch/x86/include/asm/vdso/getrandom*
RAPIDIO SUBSYSTEM
M: Matt Porter <mporter@kernel.crashing.org>
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 1d7122a1883e..9c98b7a88cc2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -287,6 +287,7 @@ config X86
select HAVE_UNSTABLE_SCHED_CLOCK
select HAVE_USER_RETURN_NOTIFIER
select HAVE_GENERIC_VDSO
+ select VDSO_GETRANDOM if X86_64
select HOTPLUG_PARALLEL if SMP && X86_64
select HOTPLUG_SMT if SMP
select HOTPLUG_SPLIT_STARTUP if SMP && X86_32
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 215a1b202a91..c9216ac4fb1e 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -7,7 +7,7 @@
include $(srctree)/lib/vdso/Makefile
# Files to link into the vDSO:
-vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o
+vobjs-y := vdso-note.o vclock_gettime.o vgetcpu.o vgetrandom.o vgetrandom-chacha.o
vobjs32-y := vdso32/note.o vdso32/system_call.o vdso32/sigreturn.o
vobjs32-y += vdso32/vclock_gettime.o vdso32/vgetcpu.o
vobjs-$(CONFIG_X86_SGX) += vsgx.o
@@ -73,6 +73,7 @@ CFLAGS_REMOVE_vdso32/vclock_gettime.o = -pg
CFLAGS_REMOVE_vgetcpu.o = -pg
CFLAGS_REMOVE_vdso32/vgetcpu.o = -pg
CFLAGS_REMOVE_vsgx.o = -pg
+CFLAGS_REMOVE_vgetrandom.o = -pg
#
# X32 processes use x32 vDSO to access 64bit kernel data.
diff --git a/arch/x86/entry/vdso/vdso.lds.S b/arch/x86/entry/vdso/vdso.lds.S
index e8c60ae7a7c8..0bab5f4af6d1 100644
--- a/arch/x86/entry/vdso/vdso.lds.S
+++ b/arch/x86/entry/vdso/vdso.lds.S
@@ -30,6 +30,8 @@ VERSION {
#ifdef CONFIG_X86_SGX
__vdso_sgx_enter_enclave;
#endif
+ getrandom;
+ __vdso_getrandom;
local: *;
};
}
diff --git a/arch/x86/entry/vdso/vgetrandom-chacha.S b/arch/x86/entry/vdso/vgetrandom-chacha.S
new file mode 100644
index 000000000000..bcba5639b8ee
--- /dev/null
+++ b/arch/x86/entry/vdso/vgetrandom-chacha.S
@@ -0,0 +1,178 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <linux/linkage.h>
+#include <asm/frame.h>
+
+.section .rodata, "a"
+.align 16
+CONSTANTS: .octa 0x6b20657479622d323320646e61707865
+.text
+
+/*
+ * Very basic SSE2 implementation of ChaCha20. Produces a given positive number
+ * of blocks of output with a nonce of 0, taking an input key and 8-byte
+ * counter. Importantly does not spill to the stack. Its arguments are:
+ *
+ * rdi: output bytes
+ * rsi: 32-byte key input
+ * rdx: 8-byte counter input/output
+ * rcx: number of 64-byte blocks to write to output
+ */
+SYM_FUNC_START(__arch_chacha20_blocks_nostack)
+
+.set output, %rdi
+.set key, %rsi
+.set counter, %rdx
+.set nblocks, %rcx
+.set i, %al
+/* xmm registers are *not* callee-save. */
+.set temp, %xmm0
+.set state0, %xmm1
+.set state1, %xmm2
+.set state2, %xmm3
+.set state3, %xmm4
+.set copy0, %xmm5
+.set copy1, %xmm6
+.set copy2, %xmm7
+.set copy3, %xmm8
+.set one, %xmm9
+
+ /* copy0 = "expand 32-byte k" */
+ movaps CONSTANTS(%rip),copy0
+ /* copy1,copy2 = key */
+ movups 0x00(key),copy1
+ movups 0x10(key),copy2
+ /* copy3 = counter || zero nonce */
+ movq 0x00(counter),copy3
+ /* one = 1 || 0 */
+ movq $1,%rax
+ movq %rax,one
+
+.Lblock:
+ /* state0,state1,state2,state3 = copy0,copy1,copy2,copy3 */
+ movdqa copy0,state0
+ movdqa copy1,state1
+ movdqa copy2,state2
+ movdqa copy3,state3
+
+ movb $10,i
+.Lpermute:
+ /* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
+ paddd state1,state0
+ pxor state0,state3
+ movdqa state3,temp
+ pslld $16,temp
+ psrld $16,state3
+ por temp,state3
+
+ /* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
+ paddd state3,state2
+ pxor state2,state1
+ movdqa state1,temp
+ pslld $12,temp
+ psrld $20,state1
+ por temp,state1
+
+ /* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
+ paddd state1,state0
+ pxor state0,state3
+ movdqa state3,temp
+ pslld $8,temp
+ psrld $24,state3
+ por temp,state3
+
+ /* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
+ paddd state3,state2
+ pxor state2,state1
+ movdqa state1,temp
+ pslld $7,temp
+ psrld $25,state1
+ por temp,state1
+
+ /* state1[0,1,2,3] = state1[1,2,3,0] */
+ pshufd $0x39,state1,state1
+ /* state2[0,1,2,3] = state2[2,3,0,1] */
+ pshufd $0x4e,state2,state2
+ /* state3[0,1,2,3] = state3[3,0,1,2] */
+ pshufd $0x93,state3,state3
+
+ /* state0 += state1, state3 = rotl32(state3 ^ state0, 16) */
+ paddd state1,state0
+ pxor state0,state3
+ movdqa state3,temp
+ pslld $16,temp
+ psrld $16,state3
+ por temp,state3
+
+ /* state2 += state3, state1 = rotl32(state1 ^ state2, 12) */
+ paddd state3,state2
+ pxor state2,state1
+ movdqa state1,temp
+ pslld $12,temp
+ psrld $20,state1
+ por temp,state1
+
+ /* state0 += state1, state3 = rotl32(state3 ^ state0, 8) */
+ paddd state1,state0
+ pxor state0,state3
+ movdqa state3,temp
+ pslld $8,temp
+ psrld $24,state3
+ por temp,state3
+
+ /* state2 += state3, state1 = rotl32(state1 ^ state2, 7) */
+ paddd state3,state2
+ pxor state2,state1
+ movdqa state1,temp
+ pslld $7,temp
+ psrld $25,state1
+ por temp,state1
+
+ /* state1[0,1,2,3] = state1[3,0,1,2] */
+ pshufd $0x93,state1,state1
+ /* state2[0,1,2,3] = state2[2,3,0,1] */
+ pshufd $0x4e,state2,state2
+ /* state3[0,1,2,3] = state3[1,2,3,0] */
+ pshufd $0x39,state3,state3
+
+ decb i
+ jnz .Lpermute
+
+ /* output0 = state0 + copy0 */
+ paddd copy0,state0
+ movups state0,0x00(output)
+ /* output1 = state1 + copy1 */
+ paddd copy1,state1
+ movups state1,0x10(output)
+ /* output2 = state2 + copy2 */
+ paddd copy2,state2
+ movups state2,0x20(output)
+ /* output3 = state3 + copy3 */
+ paddd copy3,state3
+ movups state3,0x30(output)
+
+ /* ++copy3.counter */
+ paddq one,copy3
+
+ /* output += 64, --nblocks */
+ addq $64,output
+ decq nblocks
+ jnz .Lblock
+
+ /* counter = copy3.counter */
+ movq copy3,0x00(counter)
+
+ /* Zero out the potentially sensitive regs, in case nothing uses these again. */
+ pxor state0,state0
+ pxor state1,state1
+ pxor state2,state2
+ pxor state3,state3
+ pxor copy1,copy1
+ pxor copy2,copy2
+ pxor temp,temp
+
+ ret
+SYM_FUNC_END(__arch_chacha20_blocks_nostack)
diff --git a/arch/x86/entry/vdso/vgetrandom.c b/arch/x86/entry/vdso/vgetrandom.c
new file mode 100644
index 000000000000..52d3c7faae2e
--- /dev/null
+++ b/arch/x86/entry/vdso/vgetrandom.c
@@ -0,0 +1,17 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+#include <linux/types.h>
+
+#include "../../../../lib/vdso/getrandom.c"
+
+ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state, size_t opaque_len);
+
+ssize_t __vdso_getrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state, size_t opaque_len)
+{
+ return __cvdso_getrandom(buffer, len, flags, opaque_state, opaque_len);
+}
+
+ssize_t getrandom(void *, size_t, unsigned int, void *, size_t)
+ __attribute__((weak, alias("__vdso_getrandom")));
diff --git a/arch/x86/include/asm/vdso/getrandom.h b/arch/x86/include/asm/vdso/getrandom.h
new file mode 100644
index 000000000000..b96e674cafde
--- /dev/null
+++ b/arch/x86/include/asm/vdso/getrandom.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+#ifndef __ASM_VDSO_GETRANDOM_H
+#define __ASM_VDSO_GETRANDOM_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/unistd.h>
+#include <asm/vvar.h>
+
+/**
+ * getrandom_syscall - Invoke the getrandom() syscall.
+ * @buffer: Destination buffer to fill with random bytes.
+ * @len: Size of @buffer in bytes.
+ * @flags: Zero or more GRND_* flags.
+ * Returns: The number of random bytes written to @buffer, or a negative value indicating an error.
+ */
+static __always_inline ssize_t getrandom_syscall(void *buffer, size_t len, unsigned int flags)
+{
+ long ret;
+
+ asm ("syscall" : "=a" (ret) :
+ "0" (__NR_getrandom), "D" (buffer), "S" (len), "d" (flags) :
+ "rcx", "r11", "memory");
+
+ return ret;
+}
+
+#define __vdso_rng_data (VVAR(_vdso_rng_data))
+
+static __always_inline const struct vdso_rng_data *__arch_get_vdso_rng_data(void)
+{
+ if (IS_ENABLED(CONFIG_TIME_NS) && __vdso_data->clock_mode == VDSO_CLOCKMODE_TIMENS)
+ return (void *)&__vdso_rng_data + ((void *)&__timens_vdso_data - (void *)&__vdso_data);
+ return &__vdso_rng_data;
+}
+
+/**
+ * __arch_chacha20_blocks_nostack - Generate ChaCha20 stream without using the stack.
+ * @dst_bytes: Destination buffer to hold @nblocks * 64 bytes of output.
+ * @key: 32-byte input key.
+ * @counter: 8-byte counter, read on input and updated on return.
+ * @nblocks: Number of blocks to generate.
+ *
+ * Generates a given positive number of blocks of ChaCha20 output with nonce=0, and does not write
+ * to any stack or memory outside of the parameters passed to it, in order to mitigate stack data
+ * leaking into forked child processes.
+ */
+extern void __arch_chacha20_blocks_nostack(u8 *dst_bytes, const u32 *key, u32 *counter, size_t nblocks);
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* __ASM_VDSO_GETRANDOM_H */
diff --git a/arch/x86/include/asm/vdso/vsyscall.h b/arch/x86/include/asm/vdso/vsyscall.h
index be199a9b2676..71c56586a22f 100644
--- a/arch/x86/include/asm/vdso/vsyscall.h
+++ b/arch/x86/include/asm/vdso/vsyscall.h
@@ -11,6 +11,8 @@
#include <asm/vvar.h>
DEFINE_VVAR(struct vdso_data, _vdso_data);
+DEFINE_VVAR_SINGLE(struct vdso_rng_data, _vdso_rng_data);
+
/*
* Update the vDSO data page to keep in sync with kernel timekeeping.
*/
diff --git a/arch/x86/include/asm/vvar.h b/arch/x86/include/asm/vvar.h
index 183e98e49ab9..9d9af37f7cab 100644
--- a/arch/x86/include/asm/vvar.h
+++ b/arch/x86/include/asm/vvar.h
@@ -26,6 +26,8 @@
*/
#define DECLARE_VVAR(offset, type, name) \
EMIT_VVAR(name, offset)
+#define DECLARE_VVAR_SINGLE(offset, type, name) \
+ EMIT_VVAR(name, offset)
#else
@@ -37,6 +39,10 @@ extern char __vvar_page;
extern type timens_ ## name[CS_BASES] \
__attribute__((visibility("hidden"))); \
+#define DECLARE_VVAR_SINGLE(offset, type, name) \
+ extern type vvar_ ## name \
+ __attribute__((visibility("hidden"))); \
+
#define VVAR(name) (vvar_ ## name)
#define TIMENS(name) (timens_ ## name)
@@ -44,12 +50,22 @@ extern char __vvar_page;
type name[CS_BASES] \
__attribute__((section(".vvar_" #name), aligned(16))) __visible
+#define DEFINE_VVAR_SINGLE(type, name) \
+ type name \
+ __attribute__((section(".vvar_" #name), aligned(16))) __visible
+
#endif
/* DECLARE_VVAR(offset, type, name) */
DECLARE_VVAR(128, struct vdso_data, _vdso_data)
+#if !defined(_SINGLE_DATA)
+#define _SINGLE_DATA
+DECLARE_VVAR_SINGLE(640, struct vdso_rng_data, _vdso_rng_data)
+#endif
+
#undef DECLARE_VVAR
+#undef DECLARE_VVAR_SINGLE
#endif
--
2.45.2
^ permalink raw reply related [flat|nested] 39+ messages in thread
* [PATCH v22 4/4] selftests/vDSO: add tests for vgetrandom
2024-07-09 13:05 [PATCH v22 0/4] implement getrandom() in vDSO Jason A. Donenfeld
` (2 preceding siblings ...)
2024-07-09 13:05 ` [PATCH v22 3/4] x86: vdso: Wire up getrandom() vDSO implementation Jason A. Donenfeld
@ 2024-07-09 13:05 ` Jason A. Donenfeld
3 siblings, 0 replies; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-09 13:05 UTC (permalink / raw)
To: linux-kernel, patches, tglx
Cc: Jason A. Donenfeld, linux-crypto, linux-api, x86, Linus Torvalds,
Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell,
Florian Weimer, Arnd Bergmann, Jann Horn, Christian Brauner,
David Hildenbrand, linux-kselftest
This adds two tests for vgetrandom. The first one, vdso_test_chacha,
simply checks that the assembly implementation of chacha20 matches that
of libsodium, a basic sanity check that should catch most errors. The
second, vdso_test_getrandom, is a full "libc-like" implementation of the
userspace side of vgetrandom() support. It's also meant to serve as
example code for libcs that might be integrating this.
Cc: linux-kselftest@vger.kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
tools/include/asm/rwonce.h | 0
tools/testing/selftests/vDSO/.gitignore | 2 +
tools/testing/selftests/vDSO/Makefile | 18 ++
.../testing/selftests/vDSO/vdso_test_chacha.c | 43 +++
.../selftests/vDSO/vdso_test_getrandom.c | 288 ++++++++++++++++++
5 files changed, 351 insertions(+)
create mode 100644 tools/include/asm/rwonce.h
create mode 100644 tools/testing/selftests/vDSO/vdso_test_chacha.c
create mode 100644 tools/testing/selftests/vDSO/vdso_test_getrandom.c
diff --git a/tools/include/asm/rwonce.h b/tools/include/asm/rwonce.h
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/tools/testing/selftests/vDSO/.gitignore b/tools/testing/selftests/vDSO/.gitignore
index a8dc51af5a9c..30d5c8f0e5c7 100644
--- a/tools/testing/selftests/vDSO/.gitignore
+++ b/tools/testing/selftests/vDSO/.gitignore
@@ -6,3 +6,5 @@ vdso_test_correctness
vdso_test_gettimeofday
vdso_test_getcpu
vdso_standalone_test_x86
+vdso_test_getrandom
+vdso_test_chacha
diff --git a/tools/testing/selftests/vDSO/Makefile b/tools/testing/selftests/vDSO/Makefile
index 98d8ba2afa00..3de8e7e052ae 100644
--- a/tools/testing/selftests/vDSO/Makefile
+++ b/tools/testing/selftests/vDSO/Makefile
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
uname_M := $(shell uname -m 2>/dev/null || echo not)
ARCH ?= $(shell echo $(uname_M) | sed -e s/i.86/x86/ -e s/x86_64/x86/)
+SODIUM := $(shell pkg-config --libs libsodium 2>/dev/null)
TEST_GEN_PROGS := vdso_test_gettimeofday
TEST_GEN_PROGS += vdso_test_getcpu
@@ -10,6 +11,12 @@ ifeq ($(ARCH),$(filter $(ARCH),x86 x86_64))
TEST_GEN_PROGS += vdso_standalone_test_x86
endif
TEST_GEN_PROGS += vdso_test_correctness
+ifeq ($(uname_M),x86_64)
+TEST_GEN_PROGS += vdso_test_getrandom
+ifneq ($(SODIUM),)
+TEST_GEN_PROGS += vdso_test_chacha
+endif
+endif
CFLAGS := -std=gnu99
@@ -28,3 +35,14 @@ $(OUTPUT)/vdso_standalone_test_x86: CFLAGS +=-nostdlib -fno-asynchronous-unwind-
$(OUTPUT)/vdso_test_correctness: vdso_test_correctness.c
$(OUTPUT)/vdso_test_correctness: LDFLAGS += -ldl
+
+$(OUTPUT)/vdso_test_getrandom: parse_vdso.c
+$(OUTPUT)/vdso_test_getrandom: CFLAGS += -isystem $(top_srcdir)/tools/include \
+ -isystem $(top_srcdir)/include/uapi
+
+$(OUTPUT)/vdso_test_chacha: $(top_srcdir)/arch/$(ARCH)/entry/vdso/vgetrandom-chacha.S
+$(OUTPUT)/vdso_test_chacha: CFLAGS += -idirafter $(top_srcdir)/tools/include \
+ -isystem $(top_srcdir)/arch/$(ARCH)/include \
+ -isystem $(top_srcdir)/include \
+ -D__ASSEMBLY__ -DBUILD_VDSO -DCONFIG_FUNCTION_ALIGNMENT=0 \
+ -Wa,--noexecstack $(SODIUM)
diff --git a/tools/testing/selftests/vDSO/vdso_test_chacha.c b/tools/testing/selftests/vDSO/vdso_test_chacha.c
new file mode 100644
index 000000000000..e38f44e5f803
--- /dev/null
+++ b/tools/testing/selftests/vDSO/vdso_test_chacha.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <sodium/crypto_stream_chacha20.h>
+#include <sys/random.h>
+#include <string.h>
+#include <stdint.h>
+#include "../kselftest.h"
+
+extern void __arch_chacha20_blocks_nostack(uint8_t *dst_bytes, const uint8_t *key, uint32_t *counter, size_t nblocks);
+
+int main(int argc, char *argv[])
+{
+ enum { TRIALS = 1000, BLOCKS = 128, BLOCK_SIZE = 64 };
+ static const uint8_t nonce[8] = { 0 };
+ uint32_t counter[2];
+ uint8_t key[32];
+ uint8_t output1[BLOCK_SIZE * BLOCKS], output2[BLOCK_SIZE * BLOCKS];
+
+ ksft_print_header();
+ ksft_set_plan(1);
+
+ for (unsigned int trial = 0; trial < TRIALS; ++trial) {
+ if (getrandom(key, sizeof(key), 0) != sizeof(key)) {
+ printf("getrandom() failed!\n");
+ return KSFT_SKIP;
+ }
+ crypto_stream_chacha20(output1, sizeof(output1), nonce, key);
+ for (unsigned int split = 0; split < BLOCKS; ++split) {
+ memset(output2, 'X', sizeof(output2));
+ memset(counter, 0, sizeof(counter));
+ if (split)
+ __arch_chacha20_blocks_nostack(output2, key, counter, split);
+ __arch_chacha20_blocks_nostack(output2 + split * BLOCK_SIZE, key, counter, BLOCKS - split);
+ if (memcmp(output1, output2, sizeof(output1)))
+ return KSFT_FAIL;
+ }
+ }
+ ksft_test_result_pass("chacha: PASS\n");
+ return KSFT_PASS;
+}
diff --git a/tools/testing/selftests/vDSO/vdso_test_getrandom.c b/tools/testing/selftests/vDSO/vdso_test_getrandom.c
new file mode 100644
index 000000000000..05122425a873
--- /dev/null
+++ b/tools/testing/selftests/vDSO/vdso_test_getrandom.c
@@ -0,0 +1,288 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022-2024 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
+ */
+
+#include <assert.h>
+#include <pthread.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <time.h>
+#include <unistd.h>
+#include <signal.h>
+#include <sys/auxv.h>
+#include <sys/mman.h>
+#include <sys/random.h>
+#include <sys/syscall.h>
+#include <sys/types.h>
+#include <linux/random.h>
+
+#include "../kselftest.h"
+#include "parse_vdso.h"
+
+#ifndef timespecsub
+#define timespecsub(tsp, usp, vsp) \
+ do { \
+ (vsp)->tv_sec = (tsp)->tv_sec - (usp)->tv_sec; \
+ (vsp)->tv_nsec = (tsp)->tv_nsec - (usp)->tv_nsec; \
+ if ((vsp)->tv_nsec < 0) { \
+ (vsp)->tv_sec--; \
+ (vsp)->tv_nsec += 1000000000L; \
+ } \
+ } while (0)
+#endif
+
+static struct {
+ pthread_mutex_t lock;
+ void **states;
+ size_t len, cap;
+} grnd_allocator = {
+ .lock = PTHREAD_MUTEX_INITIALIZER
+};
+
+static struct {
+ ssize_t(*fn)(void *, size_t, unsigned long, void *, size_t);
+ pthread_key_t key;
+ pthread_once_t initialized;
+ struct vgetrandom_opaque_params params;
+} grnd_ctx = {
+ .initialized = PTHREAD_ONCE_INIT
+};
+
+static void *vgetrandom_get_state(void)
+{
+ void *state = NULL;
+
+ pthread_mutex_lock(&grnd_allocator.lock);
+ if (!grnd_allocator.len) {
+ size_t page_size = getpagesize();
+ size_t new_cap;
+ size_t alloc_size, num = sysconf(_SC_NPROCESSORS_ONLN); /* Just a decent heuristic. */
+ void *new_block, *new_states;
+
+ alloc_size = (num * grnd_ctx.params.size_of_opaque_state + page_size - 1) & (~(page_size - 1));
+ num = (page_size / grnd_ctx.params.size_of_opaque_state) * (alloc_size / page_size);
+ new_block = mmap(0, alloc_size, grnd_ctx.params.mmap_prot, grnd_ctx.params.mmap_flags, -1, 0);
+ if (new_block == MAP_FAILED)
+ goto out;
+
+ new_cap = grnd_allocator.cap + num;
+ new_states = reallocarray(grnd_allocator.states, new_cap, sizeof(*grnd_allocator.states));
+ if (!new_states)
+ goto unmap;
+ grnd_allocator.cap = new_cap;
+ grnd_allocator.states = new_states;
+
+ for (size_t i = 0; i < num; ++i) {
+ if (((uintptr_t)new_block & (page_size - 1)) + grnd_ctx.params.size_of_opaque_state > page_size)
+ new_block = (void *)(((uintptr_t)new_block + page_size - 1) & (~(page_size - 1)));
+ grnd_allocator.states[i] = new_block;
+ new_block += grnd_ctx.params.size_of_opaque_state;
+ }
+ grnd_allocator.len = num;
+ goto success;
+
+ unmap:
+ munmap(new_block, alloc_size);
+ goto out;
+ }
+success:
+ state = grnd_allocator.states[--grnd_allocator.len];
+
+out:
+ pthread_mutex_unlock(&grnd_allocator.lock);
+ return state;
+}
+
+static void vgetrandom_put_state(void *state)
+{
+ if (!state)
+ return;
+ pthread_mutex_lock(&grnd_allocator.lock);
+ grnd_allocator.states[grnd_allocator.len++] = state;
+ pthread_mutex_unlock(&grnd_allocator.lock);
+}
+
+static void vgetrandom_init(void)
+{
+ if (pthread_key_create(&grnd_ctx.key, vgetrandom_put_state) != 0)
+ return;
+ unsigned long sysinfo_ehdr = getauxval(AT_SYSINFO_EHDR);
+ if (!sysinfo_ehdr) {
+ printf("AT_SYSINFO_EHDR is not present!\n");
+ exit(KSFT_SKIP);
+ }
+ vdso_init_from_sysinfo_ehdr(sysinfo_ehdr);
+ grnd_ctx.fn = (__typeof__(grnd_ctx.fn))vdso_sym("LINUX_2.6", "__vdso_getrandom");
+ if (!grnd_ctx.fn) {
+ printf("__vdso_getrandom is missing!\n");
+ exit(KSFT_FAIL);
+ }
+ if (grnd_ctx.fn(NULL, 0, 0, &grnd_ctx.params, ~0UL) != 0) {
+ printf("failed to fetch vgetrandom params!\n");
+ exit(KSFT_FAIL);
+ }
+}
+
+static ssize_t vgetrandom(void *buf, size_t len, unsigned long flags)
+{
+ void *state;
+
+ pthread_once(&grnd_ctx.initialized, vgetrandom_init);
+ state = pthread_getspecific(grnd_ctx.key);
+ if (!state) {
+ state = vgetrandom_get_state();
+ if (pthread_setspecific(grnd_ctx.key, state) != 0) {
+ vgetrandom_put_state(state);
+ state = NULL;
+ }
+ if (!state) {
+ printf("vgetrandom_get_state failed!\n");
+ exit(KSFT_FAIL);
+ }
+ }
+ return grnd_ctx.fn(buf, len, flags, state, grnd_ctx.params.size_of_opaque_state);
+}
+
+enum { TRIALS = 25000000, THREADS = 256 };
+
+static void *test_vdso_getrandom(void *ctx)
+{
+ for (size_t i = 0; i < TRIALS; ++i) {
+ unsigned int val;
+ ssize_t ret = vgetrandom(&val, sizeof(val), 0);
+ assert(ret == sizeof(val));
+ }
+ return NULL;
+}
+
+static void *test_libc_getrandom(void *ctx)
+{
+ for (size_t i = 0; i < TRIALS; ++i) {
+ unsigned int val;
+ ssize_t ret = getrandom(&val, sizeof(val), 0);
+ assert(ret == sizeof(val));
+ }
+ return NULL;
+}
+
+static void *test_syscall_getrandom(void *ctx)
+{
+ for (size_t i = 0; i < TRIALS; ++i) {
+ unsigned int val;
+ ssize_t ret = syscall(__NR_getrandom, &val, sizeof(val), 0);
+ assert(ret == sizeof(val));
+ }
+ return NULL;
+}
+
+static void bench_single(void)
+{
+ struct timespec start, end, diff;
+
+ clock_gettime(CLOCK_MONOTONIC, &start);
+ test_vdso_getrandom(NULL);
+ clock_gettime(CLOCK_MONOTONIC, &end);
+ timespecsub(&end, &start, &diff);
+ printf(" vdso: %u times in %lu.%09lu seconds\n", TRIALS, diff.tv_sec, diff.tv_nsec);
+
+ clock_gettime(CLOCK_MONOTONIC, &start);
+ test_libc_getrandom(NULL);
+ clock_gettime(CLOCK_MONOTONIC, &end);
+ timespecsub(&end, &start, &diff);
+ printf(" libc: %u times in %lu.%09lu seconds\n", TRIALS, diff.tv_sec, diff.tv_nsec);
+
+ clock_gettime(CLOCK_MONOTONIC, &start);
+ test_syscall_getrandom(NULL);
+ clock_gettime(CLOCK_MONOTONIC, &end);
+ timespecsub(&end, &start, &diff);
+ printf("syscall: %u times in %lu.%09lu seconds\n", TRIALS, diff.tv_sec, diff.tv_nsec);
+}
+
+static void bench_multi(void)
+{
+ struct timespec start, end, diff;
+ pthread_t threads[THREADS];
+
+ clock_gettime(CLOCK_MONOTONIC, &start);
+ for (size_t i = 0; i < THREADS; ++i)
+ assert(pthread_create(&threads[i], NULL, test_vdso_getrandom, NULL) == 0);
+ for (size_t i = 0; i < THREADS; ++i)
+ pthread_join(threads[i], NULL);
+ clock_gettime(CLOCK_MONOTONIC, &end);
+ timespecsub(&end, &start, &diff);
+ printf(" vdso: %u x %u times in %lu.%09lu seconds\n", TRIALS, THREADS, diff.tv_sec, diff.tv_nsec);
+
+ clock_gettime(CLOCK_MONOTONIC, &start);
+ for (size_t i = 0; i < THREADS; ++i)
+ assert(pthread_create(&threads[i], NULL, test_libc_getrandom, NULL) == 0);
+ for (size_t i = 0; i < THREADS; ++i)
+ pthread_join(threads[i], NULL);
+ clock_gettime(CLOCK_MONOTONIC, &end);
+ timespecsub(&end, &start, &diff);
+ printf(" libc: %u x %u times in %lu.%09lu seconds\n", TRIALS, THREADS, diff.tv_sec, diff.tv_nsec);
+
+ clock_gettime(CLOCK_MONOTONIC, &start);
+ for (size_t i = 0; i < THREADS; ++i)
+ assert(pthread_create(&threads[i], NULL, test_syscall_getrandom, NULL) == 0);
+ for (size_t i = 0; i < THREADS; ++i)
+ pthread_join(threads[i], NULL);
+ clock_gettime(CLOCK_MONOTONIC, &end);
+ timespecsub(&end, &start, &diff);
+ printf(" syscall: %u x %u times in %lu.%09lu seconds\n", TRIALS, THREADS, diff.tv_sec, diff.tv_nsec);
+}
+
+static void fill(void)
+{
+ uint8_t weird_size[323929];
+ for (;;)
+ vgetrandom(weird_size, sizeof(weird_size), 0);
+}
+
+static void kselftest(void)
+{
+ uint8_t weird_size[1263];
+
+ ksft_print_header();
+ ksft_set_plan(1);
+
+ for (size_t i = 0; i < 1000; ++i) {
+ ssize_t ret = vgetrandom(weird_size, sizeof(weird_size), 0);
+ if (ret != sizeof(weird_size))
+ exit(KSFT_FAIL);
+ }
+
+ ksft_test_result_pass("getrandom: PASS\n");
+ exit(KSFT_PASS);
+}
+
+static void usage(const char *argv0)
+{
+ fprintf(stderr, "Usage: %s [bench-single|bench-multi|fill]\n", argv0);
+}
+
+int main(int argc, char *argv[])
+{
+ if (argc == 1) {
+ kselftest();
+ return 0;
+ }
+
+ if (argc != 2) {
+ usage(argv[0]);
+ return 1;
+ }
+ if (!strcmp(argv[1], "bench-single"))
+ bench_single();
+ else if (!strcmp(argv[1], "bench-multi"))
+ bench_multi();
+ else if (!strcmp(argv[1], "fill"))
+ fill();
+ else {
+ usage(argv[0]);
+ return 1;
+ }
+ return 0;
+}
--
2.45.2
^ permalink raw reply related [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-09 13:05 ` [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings Jason A. Donenfeld
@ 2024-07-10 3:27 ` David Hildenbrand
2024-07-10 4:05 ` David Hildenbrand
2024-07-11 22:29 ` David Hildenbrand
0 siblings, 2 replies; 39+ messages in thread
From: David Hildenbrand @ 2024-07-10 3:27 UTC (permalink / raw)
To: Jason A. Donenfeld, linux-kernel, patches, tglx
Cc: linux-crypto, linux-api, x86, Linus Torvalds, Greg Kroah-Hartman,
Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer,
Arnd Bergmann, Jann Horn, Christian Brauner, David Hildenbrand,
linux-mm
On 09.07.24 15:05, Jason A. Donenfeld wrote:
> The vDSO getrandom() implementation works with a buffer allocated with a
> new system call that has certain requirements:
>
> - It shouldn't be written to core dumps.
> * Easy: VM_DONTDUMP.
> - It should be zeroed on fork.
> * Easy: VM_WIPEONFORK.
>
> - It shouldn't be written to swap.
> * Uh-oh: mlock is rlimited.
> * Uh-oh: mlock isn't inherited by forks.
>
> It turns out that the vDSO getrandom() function has three really nice
> characteristics that we can exploit to solve this problem:
>
> 1) Due to being wiped during fork(), the vDSO code is already robust to
> having the contents of the pages it reads zeroed out midway through
> the function's execution.
>
> 2) In the absolute worst case of whatever contingency we're coding for,
> we have the option to fallback to the getrandom() syscall, and
> everything is fine.
>
> 3) The buffers the function uses are only ever useful for a maximum of
> 60 seconds -- a sort of cache, rather than a long term allocation.
>
> These characteristics mean that we can introduce VM_DROPPABLE, which
> has the following semantics:
>
> a) It never is written out to swap.
> b) Under memory pressure, mm can just drop the pages (so that they're
> zero when read back again).
> c) It is inherited by fork.
> d) It doesn't count against the mlock budget, since nothing is locked.
>
> This is fairly simple to implement, with the one snag that we have to
> use 64-bit VM_* flags, but this shouldn't be a problem, since the only
> consumers will probably be 64-bit anyway.
>
> This way, allocations used by vDSO getrandom() can use:
>
> VM_DROPPABLE | VM_DONTDUMP | VM_WIPEONFORK | VM_NORESERVE
>
> And there will be no problem with using memory when not in use, not
> wiping on fork(), coredumps, or writing out to swap.
>
> In order to let vDSO getrandom() use this, expose these via mmap(2) as
> MAP_DROPPABLE.
>
> Finally, the provided self test ensures that this is working as desired.
Acked-by: David Hildenbrand <david@redhat.com>
I'll try to think of some corner cases we might be missing.
As raised, I think we could do better at naming, such as "MAP_FREEABLE"
to match MADV_FREE, MAP_VOLATILE, ... but if nobody else cares, I shall
not care :)
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-10 3:27 ` David Hildenbrand
@ 2024-07-10 4:05 ` David Hildenbrand
2024-07-11 0:44 ` Jason A. Donenfeld
2024-07-11 22:29 ` David Hildenbrand
1 sibling, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2024-07-10 4:05 UTC (permalink / raw)
To: Jason A. Donenfeld, linux-kernel, patches, tglx
Cc: linux-crypto, linux-api, x86, Linus Torvalds, Greg Kroah-Hartman,
Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer,
Arnd Bergmann, Jann Horn, Christian Brauner, David Hildenbrand,
linux-mm
On 10.07.24 05:27, David Hildenbrand wrote:
> On 09.07.24 15:05, Jason A. Donenfeld wrote:
>> The vDSO getrandom() implementation works with a buffer allocated with a
>> new system call that has certain requirements:
>>
>> - It shouldn't be written to core dumps.
>> * Easy: VM_DONTDUMP.
>> - It should be zeroed on fork.
>> * Easy: VM_WIPEONFORK.
>>
>> - It shouldn't be written to swap.
>> * Uh-oh: mlock is rlimited.
>> * Uh-oh: mlock isn't inherited by forks.
>>
>> It turns out that the vDSO getrandom() function has three really nice
>> characteristics that we can exploit to solve this problem:
>>
>> 1) Due to being wiped during fork(), the vDSO code is already robust to
>> having the contents of the pages it reads zeroed out midway through
>> the function's execution.
>>
>> 2) In the absolute worst case of whatever contingency we're coding for,
>> we have the option to fallback to the getrandom() syscall, and
>> everything is fine.
>>
>> 3) The buffers the function uses are only ever useful for a maximum of
>> 60 seconds -- a sort of cache, rather than a long term allocation.
>>
>> These characteristics mean that we can introduce VM_DROPPABLE, which
>> has the following semantics:
>>
>> a) It never is written out to swap.
>> b) Under memory pressure, mm can just drop the pages (so that they're
>> zero when read back again).
>> c) It is inherited by fork.
>> d) It doesn't count against the mlock budget, since nothing is locked.
>>
>> This is fairly simple to implement, with the one snag that we have to
>> use 64-bit VM_* flags, but this shouldn't be a problem, since the only
>> consumers will probably be 64-bit anyway.
>>
>> This way, allocations used by vDSO getrandom() can use:
>>
>> VM_DROPPABLE | VM_DONTDUMP | VM_WIPEONFORK | VM_NORESERVE
>>
>> And there will be no problem with using memory when not in use, not
>> wiping on fork(), coredumps, or writing out to swap.
>>
>> In order to let vDSO getrandom() use this, expose these via mmap(2) as
>> MAP_DROPPABLE.
>>
>> Finally, the provided self test ensures that this is working as desired.
>
> Acked-by: David Hildenbrand <david@redhat.com>
>
>
> I'll try to think of some corner cases we might be missing.
BTW, do we have to handle the folio_set_swapbacked() in sort_folio() as well?
/* dirty lazyfree */
if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
success = lru_gen_del_folio(lruvec, folio, true);
VM_WARN_ON_ONCE_FOLIO(!success, folio);
folio_set_swapbacked(folio);
lruvec_add_folio_tail(lruvec, folio);
return true;
}
Maybe more difficult because we don't have a VMA here ... hmm
IIUC, we have to make sure that no folio_set_swapbacked() would ever get
performed on these folios, correct?
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-10 4:05 ` David Hildenbrand
@ 2024-07-11 0:44 ` Jason A. Donenfeld
2024-07-11 4:32 ` Jason A. Donenfeld
0 siblings, 1 reply; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 0:44 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
Linus Torvalds, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
Hi David,
On Wed, Jul 10, 2024 at 06:05:34AM +0200, David Hildenbrand wrote:
> BTW, do we have to handle the folio_set_swapbacked() in sort_folio() as well?
>
>
> /* dirty lazyfree */
> if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
> success = lru_gen_del_folio(lruvec, folio, true);
> VM_WARN_ON_ONCE_FOLIO(!success, folio);
> folio_set_swapbacked(folio);
> lruvec_add_folio_tail(lruvec, folio);
> return true;
> }
>
> Maybe more difficult because we don't have a VMA here ... hmm
>
> IIUC, we have to make sure that no folio_set_swapbacked() would ever get
> performed on these folios, correct?
Hmmm, I'm trying to figure out what to do here, and if we have to do
something. All three conditions in that if statement will be true for a
folio in a droppable mapping. That's supposed to match MADV_FREE
mappings.
What is the context of this, though? It's scanning pages for good ones
to evict into swap, right? So if it encounters one that's an MADV_FREE
page, it actually just wants to delete it, rather than sending it to
swap. So it looks like it does just that, and then sets the swapbacked
bit back to true, in case the folio is used for something different
later?
If that's correct, then I don't think we need to do anything for this
one.
If that's not correct, then we'll need to propagate the droppableness
to the folio level. But hopefully we don't need to do that.
What's your analysis of this like?
Jason
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 0:44 ` Jason A. Donenfeld
@ 2024-07-11 4:32 ` Jason A. Donenfeld
2024-07-11 4:46 ` David Hildenbrand
0 siblings, 1 reply; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 4:32 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
Linus Torvalds, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Thu, Jul 11, 2024 at 02:44:29AM +0200, Jason A. Donenfeld wrote:
> Hi David,
>
> On Wed, Jul 10, 2024 at 06:05:34AM +0200, David Hildenbrand wrote:
> > BTW, do we have to handle the folio_set_swapbacked() in sort_folio() as well?
> >
> >
> > /* dirty lazyfree */
> > if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
> > success = lru_gen_del_folio(lruvec, folio, true);
> > VM_WARN_ON_ONCE_FOLIO(!success, folio);
> > folio_set_swapbacked(folio);
> > lruvec_add_folio_tail(lruvec, folio);
> > return true;
> > }
> >
> > Maybe more difficult because we don't have a VMA here ... hmm
> >
> > IIUC, we have to make sure that no folio_set_swapbacked() would ever get
> > performed on these folios, correct?
>
> Hmmm, I'm trying to figure out what to do here, and if we have to do
> something. All three conditions in that if statement will be true for a
> folio in a droppable mapping. That's supposed to match MADV_FREE
> mappings.
>
> What is the context of this, though? It's scanning pages for good ones
> to evict into swap, right? So if it encounters one that's an MADV_FREE
> page, it actually just wants to delete it, rather than sending it to
> swap. So it looks like it does just that, and then sets the swapbacked
> bit back to true, in case the folio is used for something differnet
> later?
>
> If that's correct, then I don't think we need to do anything for this
> one.
>
> If that's not correct, then we'll need to propagate the droppableness
> to the folio level. But hopefully we don't need to do that.
Looks like that's not correct. This is for pages that have been dirtied
since calling MADV_FREE. So, hm.
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 4:32 ` Jason A. Donenfeld
@ 2024-07-11 4:46 ` David Hildenbrand
2024-07-11 5:07 ` Linus Torvalds
0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 4:46 UTC (permalink / raw)
To: Jason A. Donenfeld
Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
Linus Torvalds, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On 11.07.24 06:32, Jason A. Donenfeld wrote:
> On Thu, Jul 11, 2024 at 02:44:29AM +0200, Jason A. Donenfeld wrote:
>> Hi David,
>>
>> On Wed, Jul 10, 2024 at 06:05:34AM +0200, David Hildenbrand wrote:
>>> BTW, do we have to handle the folio_set_swapbacked() in sort_folio() as well?
>>>
>>>
>>> /* dirty lazyfree */
>>> if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
>>> success = lru_gen_del_folio(lruvec, folio, true);
>>> VM_WARN_ON_ONCE_FOLIO(!success, folio);
>>> folio_set_swapbacked(folio);
>>> lruvec_add_folio_tail(lruvec, folio);
>>> return true;
>>> }
>>>
>>> Maybe more difficult because we don't have a VMA here ... hmm
>>>
>>> IIUC, we have to make sure that no folio_set_swapbacked() would ever get
>>> performed on these folios, correct?
>>
>> Hmmm, I'm trying to figure out what to do here, and if we have to do
>> something. All three conditions in that if statement will be true for a
>> folio in a droppable mapping. That's supposed to match MADV_FREE
>> mappings.
>>
>> What is the context of this, though? It's scanning pages for good ones
>> to evict into swap, right? So if it encounters one that's an MADV_FREE
>> page, it actually just wants to delete it, rather than sending it to
>> swap. So it looks like it does just that, and then sets the swapbacked
>> bit back to true, in case the folio is used for something differnet
>> later?
>>
>> If that's correct, then I don't think we need to do anything for this
>> one.
>>
>> If that's not correct, then we'll need to propagate the droppableness
>> to the folio level. But hopefully we don't need to do that.
>
> Looks like that's not correct. This is for pages that have been dirtied
> since calling MADV_FREE. So, hm.
>
Maybe we can find ways of simply never marking these pages dirty, so we
don't have to special-case that code where we don't really have a VMA at
hand?
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 4:46 ` David Hildenbrand
@ 2024-07-11 5:07 ` Linus Torvalds
2024-07-11 17:09 ` Jason A. Donenfeld
0 siblings, 1 reply; 39+ messages in thread
From: Linus Torvalds @ 2024-07-11 5:07 UTC (permalink / raw)
To: David Hildenbrand
Cc: Jason A. Donenfeld, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Wed, 10 Jul 2024 at 21:46, David Hildenbrand <david@redhat.com> wrote:
>
> Maybe we can find ways of simply never marking these pages dirty, so we
> don't have to special-case that code where we don't really have a VMA at
> hand?
That's one option. Jason's patch basically goes "ignore folio dirty
bit for these pages".
Your suggestion basically says "don't turn folios dirty in the first place".
It's mainly the pte_dirty games in mm/vmscan.c that do it
(walk_pte_range), but also the tear-down in mm/memory.c
(zap_present_folio_ptes). Possibly others that I didn't think of.
Both do have access to the vma, although in the case of
walk_pte_range() we don't actually pass it down (because we haven't
needed it).
There's also page_vma_mkclean_one(), try_to_unmap_one() and
try_to_migrate_one(). And possibly many others I haven't even thought
about.
So quite a few places that do that "transfer dirty bit from pte to folio".
The other approach might be to just let all the dirty handling happen
- make droppable pages have a "page->mapping" (and not be anonymous),
and have the mapping->a_ops->writepage() just always return success
immediately.
That might actually be a conceptually simpler model. MAP_DROPPABLE
becomes a shared mapping that just has a really cheap writeback that
throws the data away. No need to worry about swap cache or anything
like that, because that's just for anonymous pages.
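Roughly something like this -- a completely untested sketch, and the
names here (droppable_writepage, droppable_aops) are made up purely for
illustration, not an existing API:

	static int droppable_writepage(struct page *page,
				       struct writeback_control *wbc)
	{
		/*
		 * Claim success without writing anything anywhere: the
		 * contents of a droppable mapping are allowed to be lost
		 * at any time, so there is nothing to write back.
		 */
		unlock_page(page);
		return 0;
	}

	static const struct address_space_operations droppable_aops = {
		.writepage = droppable_writepage,
	};

The mapping's address_space would then point at droppable_aops, so the
reclaim writeback path would treat these pages as trivially cleanable.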
I say "conceptually simpler", because right now the patch does depend
on just using the regular anon page faulting etc code.
Linus
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 5:07 ` Linus Torvalds
@ 2024-07-11 17:09 ` Jason A. Donenfeld
2024-07-11 17:17 ` Jason A. Donenfeld
2024-07-11 17:57 ` Linus Torvalds
0 siblings, 2 replies; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 17:09 UTC (permalink / raw)
To: Linus Torvalds
Cc: David Hildenbrand, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
Hi Linus, David,
On Wed, Jul 10, 2024 at 10:07:03PM -0700, Linus Torvalds wrote:
> The other approach might be to just let all the dirty handling happen
> - make droppable pages have a "page->mapping" (and not be anonymous),
> and have the mapping->a_ops->writepage() just always return success
> immediately.
When I was working on this patchset this year with the syscall, this is
somewhat similar to the initial approach I was taking with setting up a
special mapping. It turned into kind of a mess and I couldn't get it
working. There's a lot of functionality built around anonymous pages
that would need to be duplicated (I think?). I'll revisit it if need be,
but let's see if I can make avoiding the dirty bit propagation work.
> It's mainly the pte_dirty games in mm/vmscan.c that does it
> (walk_pte_range), but also the tear-down in mm/memory.c
> (zap_present_folio_ptes). Possibly others that I didn't think of.
>
> Both do have access to the vma, although in the case of
> walk_pte_range() we don't actually pass it down because we haven't
> needed it).
Actually, it's there hanging out in args->vma, and the function makes
use of that member already. So not so bad.
>
> There's also page_vma_mkclean_one(), try_to_unmap_one() and
> try_to_migrate_one(). And possibly many others I haven't even thought
> about.
>
> So quite a few places that do that "transfer dirty bit from pte to folio".
Alright, an hour later of fiddling, and it doesn't actually work (yet?)
-- the selftest fails. A diff follows below.
So, hmm... The swapbacked thing really seemed so simple... I wonder if
there's a way of recovering that.
Jason
diff --git a/mm/gup.c b/mm/gup.c
index ca0f5cedce9b..38745cc4fa06 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -990,7 +990,8 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
}
if (flags & FOLL_TOUCH) {
if ((flags & FOLL_WRITE) &&
- !pte_dirty(pte) && !PageDirty(page))
+ !pte_dirty(pte) && !PageDirty(page) &&
+ !(vma->vm_flags & VM_DROPPABLE))
set_page_dirty(page);
/*
* pte_mkyoung() would be more correct here, but atomic care
diff --git a/mm/ksm.c b/mm/ksm.c
index 34c4820e0d3d..2401fc4203ba 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1339,7 +1339,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
goto out_unlock;
}
- if (pte_dirty(entry))
+ if (pte_dirty(entry) && !(vma->vm_flags & VM_DROPPABLE))
folio_mark_dirty(folio);
entry = pte_mkclean(entry);
@@ -1518,7 +1518,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
* Page reclaim just frees a clean page with no dirty
* ptes: make sure that the ksm page would be swapped.
*/
- if (!PageDirty(page))
+ if (!PageDirty(page) && !(vma->vm_flags & VM_DROPPABLE))
SetPageDirty(page);
err = 0;
} else if (pages_identical(page, kpage))
diff --git a/mm/memory.c b/mm/memory.c
index d10e616d7389..6a02d16309be 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1479,7 +1479,7 @@ static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
if (!folio_test_anon(folio)) {
ptent = get_and_clear_full_ptes(mm, addr, pte, nr, tlb->fullmm);
- if (pte_dirty(ptent)) {
+ if (pte_dirty(ptent) && !(vma->vm_flags & VM_DROPPABLE)) {
folio_mark_dirty(folio);
if (tlb_delay_rmap(tlb)) {
delay_rmap = true;
@@ -6140,7 +6140,8 @@ static int __access_remote_vm(struct mm_struct *mm, unsigned long addr,
if (write) {
copy_to_user_page(vma, page, addr,
maddr + offset, buf, bytes);
- set_page_dirty_lock(page);
+ if (!(vma->vm_flags & VM_DROPPABLE))
+ set_page_dirty_lock(page);
} else {
copy_from_user_page(vma, page, addr,
buf, maddr + offset, bytes);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index aecc71972a87..72d3f8eaae6e 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -216,7 +216,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
migrate->cpages++;
/* Set the dirty flag on the folio now the pte is gone. */
- if (pte_dirty(pte))
+ if (pte_dirty(pte) && !(vma->vm_flags & VM_DROPPABLE))
folio_mark_dirty(folio);
/* Setup special migration page table entry */
diff --git a/mm/rmap.c b/mm/rmap.c
index 1f9b5a9cb121..1688d06bb617 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1397,12 +1397,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
VM_BUG_ON_VMA(address < vma->vm_start ||
address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
- /*
- * VM_DROPPABLE mappings don't swap; instead they're just dropped when
- * under memory pressure.
- */
- if (!(vma->vm_flags & VM_DROPPABLE))
- __folio_set_swapbacked(folio);
+ __folio_set_swapbacked(folio);
__folio_set_anon(folio, vma, address, true);
if (likely(!folio_test_large(folio))) {
@@ -1777,7 +1772,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
/* Set the dirty flag on the folio now the pte is gone. */
- if (pte_dirty(pteval))
+ if (pte_dirty(pteval) && !(vma->vm_flags & VM_DROPPABLE))
folio_mark_dirty(folio);
/* Update high watermark before we lower rss */
@@ -1822,7 +1817,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
}
/* MADV_FREE page check */
- if (!folio_test_swapbacked(folio)) {
+ if (!folio_test_swapbacked(folio) || (vma->vm_flags & VM_DROPPABLE)) {
int ref_count, map_count;
/*
@@ -1846,13 +1841,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* plus the rmap(s) (dropped by discard:).
*/
if (ref_count == 1 + map_count &&
- (!folio_test_dirty(folio) ||
- /*
- * Unlike MADV_FREE mappings, VM_DROPPABLE
- * ones can be dropped even if they've
- * been dirtied.
- */
- (vma->vm_flags & VM_DROPPABLE))) {
+ !folio_test_dirty(folio)) {
dec_mm_counter(mm, MM_ANONPAGES);
goto discard;
}
@@ -1862,12 +1851,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* discarded. Remap the page to page table.
*/
set_pte_at(mm, address, pvmw.pte, pteval);
- /*
- * Unlike MADV_FREE mappings, VM_DROPPABLE ones
- * never get swap backed on failure to drop.
- */
- if (!(vma->vm_flags & VM_DROPPABLE))
- folio_set_swapbacked(folio);
+ folio_set_swapbacked(folio);
ret = false;
page_vma_mapped_walk_done(&pvmw);
break;
@@ -2151,7 +2135,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
}
/* Set the dirty flag on the folio now the pte is gone. */
- if (pte_dirty(pteval))
+ if (pte_dirty(pteval) && !(vma->vm_flags & VM_DROPPABLE))
folio_mark_dirty(folio);
/* Update high watermark before we lower rss */
@@ -2397,7 +2381,7 @@ static bool page_make_device_exclusive_one(struct folio *folio,
pteval = ptep_clear_flush(vma, address, pvmw.pte);
/* Set the dirty flag on the folio now the pte is gone. */
- if (pte_dirty(pteval))
+ if (pte_dirty(pteval) && !(vma->vm_flags & VM_DROPPABLE))
folio_mark_dirty(folio);
/*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2e34de9cd0d4..cf5b26bd067a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3396,6 +3396,7 @@ static bool walk_pte_range(pmd_t *pmd, unsigned long start, unsigned long end,
walk->mm_stats[MM_LEAF_YOUNG]++;
if (pte_dirty(ptent) && !folio_test_dirty(folio) &&
+ !(args->vma->vm_flags & VM_DROPPABLE) &&
!(folio_test_anon(folio) && folio_test_swapbacked(folio) &&
!folio_test_swapcache(folio)))
folio_mark_dirty(folio);
@@ -3476,6 +3477,7 @@ static void walk_pmd_range_locked(pud_t *pud, unsigned long addr, struct vm_area
walk->mm_stats[MM_LEAF_YOUNG]++;
if (pmd_dirty(pmd[i]) && !folio_test_dirty(folio) &&
+ !(vma->vm_flags & VM_DROPPABLE) &&
!(folio_test_anon(folio) && folio_test_swapbacked(folio) &&
!folio_test_swapcache(folio)))
folio_mark_dirty(folio);
@@ -4076,6 +4078,7 @@ void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
young++;
if (pte_dirty(ptent) && !folio_test_dirty(folio) &&
+ !(vma->vm_flags & VM_DROPPABLE) &&
!(folio_test_anon(folio) && folio_test_swapbacked(folio) &&
!folio_test_swapcache(folio)))
folio_mark_dirty(folio);
^ permalink raw reply related [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 17:09 ` Jason A. Donenfeld
@ 2024-07-11 17:17 ` Jason A. Donenfeld
2024-07-11 17:24 ` David Hildenbrand
2024-07-11 17:57 ` Linus Torvalds
1 sibling, 1 reply; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 17:17 UTC (permalink / raw)
To: Linus Torvalds
Cc: David Hildenbrand, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Thu, Jul 11, 2024 at 07:09:36PM +0200, Jason A. Donenfeld wrote:
> So, hmm... The swapbacked thing really seemed so simple... I wonder if
> there's a way of recovering that.
Not wanting to introduce a new bitflag, I went looking and noticed this:
/*
* Private page markings that may be used by the filesystem that owns the page
* for its own purposes.
* - PG_private and PG_private_2 cause release_folio() and co to be invoked
*/
PAGEFLAG(Private, private, PF_ANY)
PAGEFLAG(Private2, private_2, PF_ANY) TESTSCFLAG(Private2, private_2, PF_ANY)
PAGEFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
TESTCLEARFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
The below +4/-1 diff is pretty hacky and might be illegal in the state
of California, but I think it does work. The idea is that if that bit is
normally only used for filesystems, then in the anonymous case, it's
free to be used for this.
Any opinions about this, or a suggestion on how to do that in a less
ugly way?
Jason
diff --git a/mm/rmap.c b/mm/rmap.c
index 1f9b5a9cb121..090554277e4a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1403,6 +1403,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
*/
if (!(vma->vm_flags & VM_DROPPABLE))
__folio_set_swapbacked(folio);
+ else
+ folio_set_owner_priv_1(folio);
__folio_set_anon(folio, vma, address, true);
if (likely(!folio_test_large(folio))) {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2e34de9cd0d4..398b46027e8f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4266,7 +4266,8 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
}
/* dirty lazyfree */
- if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
+ if (type == LRU_GEN_FILE && folio_test_anon(folio) &&
+ folio_test_dirty(folio) && !folio_test_owner_priv_1(folio)) {
success = lru_gen_del_folio(lruvec, folio, true);
VM_WARN_ON_ONCE_FOLIO(!success, folio);
folio_set_swapbacked(folio);
^ permalink raw reply related [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 17:17 ` Jason A. Donenfeld
@ 2024-07-11 17:24 ` David Hildenbrand
2024-07-11 17:27 ` David Hildenbrand
2024-07-11 17:49 ` Jason A. Donenfeld
0 siblings, 2 replies; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 17:24 UTC (permalink / raw)
To: Jason A. Donenfeld, Linus Torvalds
Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell,
Florian Weimer, Arnd Bergmann, Jann Horn, Christian Brauner,
David Hildenbrand, linux-mm
On 11.07.24 19:17, Jason A. Donenfeld wrote:
> On Thu, Jul 11, 2024 at 07:09:36PM +0200, Jason A. Donenfeld wrote:
>> So, hmm... The swapbacked thing really seemed so simple... I wonder if
>> there's a way of recovering that.
>
> Not wanting to introduce a new bitflag, I went looking and noticed this:
>
> /*
> * Private page markings that may be used by the filesystem that owns the page
> * for its own purposes.
> * - PG_private and PG_private_2 cause release_folio() and co to be invoked
> */
> PAGEFLAG(Private, private, PF_ANY)
> PAGEFLAG(Private2, private_2, PF_ANY) TESTSCFLAG(Private2, private_2, PF_ANY)
> PAGEFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
> TESTCLEARFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
>
> The below +4/-1 diff is pretty hacky and might be illegal in the state
> of California, but I think it does work. The idea is that if that bit is
> normally only used for filesystems, then in the anonymous case, it's
> free to be used for this.
>
> Any opinions about this, or a suggestion on how to do that in a less
> ugly way?
>
> Jason
>
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 1f9b5a9cb121..090554277e4a 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1403,6 +1403,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
> */
> if (!(vma->vm_flags & VM_DROPPABLE))
> __folio_set_swapbacked(folio);
> + else
> + folio_set_owner_priv_1(folio);
PG_owner_priv_1 maps to PG_swapcache? :)
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 17:24 ` David Hildenbrand
@ 2024-07-11 17:27 ` David Hildenbrand
2024-07-11 17:54 ` Jason A. Donenfeld
2024-07-11 17:49 ` Jason A. Donenfeld
1 sibling, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 17:27 UTC (permalink / raw)
To: Jason A. Donenfeld, Linus Torvalds
Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell,
Florian Weimer, Arnd Bergmann, Jann Horn, Christian Brauner,
David Hildenbrand, linux-mm
On 11.07.24 19:24, David Hildenbrand wrote:
> On 11.07.24 19:17, Jason A. Donenfeld wrote:
>> On Thu, Jul 11, 2024 at 07:09:36PM +0200, Jason A. Donenfeld wrote:
>>> So, hmm... The swapbacked thing really seemed so simple... I wonder if
>>> there's a way of recovering that.
>>
>> Not wanting to introduce a new bitflag, I went looking and noticed this:
>>
>> /*
>> * Private page markings that may be used by the filesystem that owns the page
>> * for its own purposes.
>> * - PG_private and PG_private_2 cause release_folio() and co to be invoked
>> */
>> PAGEFLAG(Private, private, PF_ANY)
>> PAGEFLAG(Private2, private_2, PF_ANY) TESTSCFLAG(Private2, private_2, PF_ANY)
>> PAGEFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
>> TESTCLEARFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
>>
>> The below +4/-1 diff is pretty hacky and might be illegal in the state
>> of California, but I think it does work. The idea is that if that bit is
>> normally only used for filesystems, then in the anonymous case, it's
>> free to be used for this.
>>
>> Any opinions about this, or a suggestion on how to do that in a less
>> ugly way?
>>
>> Jason
>>
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index 1f9b5a9cb121..090554277e4a 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1403,6 +1403,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>> */
>> if (!(vma->vm_flags & VM_DROPPABLE))
>> __folio_set_swapbacked(folio);
>> + else
>> + folio_set_owner_priv_1(folio);
>
>
> PG_owner_priv_1 maps to PG_swapcache? :)
Maybe the combination !swapbacked && swapcache could be used to indicate
such folios. (we will never set swapbacked)
But likely we have to be a bit careful here. We don't want
folio_test_swapcache() to return for folios that ... are not in the
swapcache.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 17:24 ` David Hildenbrand
2024-07-11 17:27 ` David Hildenbrand
@ 2024-07-11 17:49 ` Jason A. Donenfeld
1 sibling, 0 replies; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 17:49 UTC (permalink / raw)
To: David Hildenbrand
Cc: Linus Torvalds, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Thu, Jul 11, 2024 at 07:24:33PM +0200, David Hildenbrand wrote:
> On 11.07.24 19:17, Jason A. Donenfeld wrote:
> > On Thu, Jul 11, 2024 at 07:09:36PM +0200, Jason A. Donenfeld wrote:
> >> So, hmm... The swapbacked thing really seemed so simple... I wonder if
> >> there's a way of recovering that.
> >
> > Not wanting to introduce a new bitflag, I went looking and noticed this:
> >
> > /*
> > * Private page markings that may be used by the filesystem that owns the page
> > * for its own purposes.
> > * - PG_private and PG_private_2 cause release_folio() and co to be invoked
> > */
> > PAGEFLAG(Private, private, PF_ANY)
> > PAGEFLAG(Private2, private_2, PF_ANY) TESTSCFLAG(Private2, private_2, PF_ANY)
> > PAGEFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
> > TESTCLEARFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
> >
> > The below +4/-1 diff is pretty hacky and might be illegal in the state
> > of California, but I think it does work. The idea is that if that bit is
> > normally only used for filesystems, then in the anonymous case, it's
> > free to be used for this.
> >
> > Any opinions about this, or a suggestion on how to do that in a less
> > ugly way?
> >
> > Jason
> >
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index 1f9b5a9cb121..090554277e4a 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1403,6 +1403,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
> > */
> > if (!(vma->vm_flags & VM_DROPPABLE))
> > __folio_set_swapbacked(folio);
> > + else
> > + folio_set_owner_priv_1(folio);
>
>
> PG_owner_priv_1 maps to PG_swapcache? :)
Oh, drat, it looks like this overloading is nothing new then.
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 17:27 ` David Hildenbrand
@ 2024-07-11 17:54 ` Jason A. Donenfeld
2024-07-11 17:56 ` Jason A. Donenfeld
0 siblings, 1 reply; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 17:54 UTC (permalink / raw)
To: David Hildenbrand
Cc: Linus Torvalds, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Thu, Jul 11, 2024 at 07:27:27PM +0200, David Hildenbrand wrote:
> > PG_owner_priv_1 maps to PG_swapcache? :)
>
> Maybe the combination !swapbacked && swapcache could be used to indicate
> such folios. (we will never set swapbacked)
>
> But likely we have to be a bit careful here. We don't want
> folio_test_swapcache() to return for folios that ... are not in the
> swapcache.
I was thinking that too, but I'm afraid it's going to be another
whack-a-mole nightmare. Even for things like task_mmu in procfs that
show stats, that's going to be wonky.
Any other flags we can overload that aren't going to be already used in
our case?
Jason
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 17:54 ` Jason A. Donenfeld
@ 2024-07-11 17:56 ` Jason A. Donenfeld
2024-07-11 18:08 ` Jason A. Donenfeld
0 siblings, 1 reply; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 17:56 UTC (permalink / raw)
To: David Hildenbrand
Cc: Linus Torvalds, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Thu, Jul 11, 2024 at 07:54:34PM +0200, Jason A. Donenfeld wrote:
> On Thu, Jul 11, 2024 at 07:27:27PM +0200, David Hildenbrand wrote:
> > > PG_owner_priv_1 maps to PG_swapcache? :)
> >
> > Maybe the combination !swapbacked && swapcache could be used to indicate
> > such folios. (we will never set swapbacked)
> >
> > But likely we have to be a bit careful here. We don't want
> > folio_test_swapcache() to return for folios that ... are not in the
> > swapcache.
>
> I was thinking that too, but I'm afraid it's going to be another
> whack-a-mole nightmare. Even for things like task_mmu in procfs that
> show stats, that's going to be wonky.
>
> Any other flags we can overload that aren't going to be already used in
> our case?
PG_error / folio_set_error seems unused in the non-IO case.
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 17:09 ` Jason A. Donenfeld
2024-07-11 17:17 ` Jason A. Donenfeld
@ 2024-07-11 17:57 ` Linus Torvalds
2024-07-11 19:07 ` David Hildenbrand
2024-07-11 20:07 ` Jason A. Donenfeld
1 sibling, 2 replies; 39+ messages in thread
From: Linus Torvalds @ 2024-07-11 17:57 UTC (permalink / raw)
To: Jason A. Donenfeld
Cc: David Hildenbrand, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Thu, 11 Jul 2024 at 10:09, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
>
> When I was working on this patchset this year with the syscall, this is
> similar somewhat to the initial approach I was taking with setting up a
> special mapping. It turned into kind of a mess and I couldn't get it
> working. There's a lot of functionality built around anonymous pages
> that would need to be duplicated (I think?).
Yeah, I was kind of assuming that. You'd need to handle VM_DROPPABLE
in the fault path specially, the way we currently split up based on
vma_is_anonymous(), eg
if (vma_is_anonymous(vmf->vma))
return do_anonymous_page(vmf);
else
return do_fault(vmf);
in do_pte_missing() etc.
I don't actually think it would be too hard, but it's a more
"conceptual" change, and it's probably not worth it.
> Alright, an hour later of fiddling, and it doesn't actually work (yet?)
> -- the selftest fails. A diff follows below.
May I suggest a slightly different approach: do what we did for "pte_mkwrite()".
It needed the vma too, for not too dissimilar reasons: special dirty
bit handling for the shadow stack. See
bb3aadf7d446 ("x86/mm: Start actually marking _PAGE_SAVED_DIRTY")
b497e52ddb2a ("x86/mm: Teach pte_mkwrite() about stack memory")
and now we have "pte_mkwrite_novma()" with the old semantics for the
legacy cases that didn't get converted - whether it's because the
architecture doesn't have the issue, or because it's a kernel pte.
And the conversion was actually quite pain-free, because we have
#ifndef pte_mkwrite
static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
{
return pte_mkwrite_novma(pte);
}
#endif
so all any architecture that didn't want this needed to do was to
rename their pte_mkwrite() to pte_mkwrite_novma() and they were done.
In fact, that was done first as basically semantically no-op patches:
2f0584f3f4bd ("mm: Rename arch pte_mkwrite()'s to pte_mkwrite_novma()")
6ecc21bb432d ("mm: Move pte/pmd_mkwrite() callers with no VMA to _novma()")
161e393c0f63 ("mm: Make pte_mkwrite() take a VMA")
which made this all very pain-free (and was largely a sed script, I think).
> - !pte_dirty(pte) && !PageDirty(page))
> + !pte_dirty(pte) && !PageDirty(page) &&
> + !(vma->vm_flags & VM_DROPPABLE))
So instead of this kind of thing, we'd have
> - !pte_dirty(pte) && !PageDirty(page))
> + !pte_dirty(pte, vma) && !PageDirty(page))
and the advantage here is that you can't miss anybody by mistake. The
compiler will be very unhappy if you don't pass in the vma, and then
any places that genuinely have no vma at hand get converted to
"pte_dirty_novma()" deliberately.
We don't actually have all that many users of pte_dirty(), so it
doesn't look too nasty. And if we make the pte_dirty() semantics
depend on the vma, I really think we should do it the same way we did
pte_mkwrite().
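
As a rough sketch of the shape that would take (untested; it assumes the
same kind of tree-wide rename as the pte_mkwrite() conversion, i.e. the
existing helpers become pte_dirty_novma() first):

	#ifndef pte_dirty
	static inline bool pte_dirty(pte_t pte, struct vm_area_struct *vma)
	{
		/*
		 * A droppable mapping may lose its contents at any time,
		 * so never let a dirty pte end up marking the folio dirty.
		 */
		if (vma->vm_flags & VM_DROPPABLE)
			return false;
		return pte_dirty_novma(pte);
	}
	#endif

and then all of the "pte_dirty(pte) && !(vma->vm_flags & VM_DROPPABLE)"
special cases above collapse back into plain pte_dirty(pte, vma) calls.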
Long-term, maybe we should just aim to always pass in the vma to the
pte_xyz() functions, but...
Linus
^ permalink raw reply [flat|nested] 39+ messages in thread
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 17:56 ` Jason A. Donenfeld
@ 2024-07-11 18:08 ` Jason A. Donenfeld
2024-07-11 18:24 ` David Hildenbrand
0 siblings, 1 reply; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 18:08 UTC (permalink / raw)
To: David Hildenbrand
Cc: Linus Torvalds, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Thu, Jul 11, 2024 at 07:56:39PM +0200, Jason A. Donenfeld wrote:
> On Thu, Jul 11, 2024 at 07:54:34PM +0200, Jason A. Donenfeld wrote:
> > On Thu, Jul 11, 2024 at 07:27:27PM +0200, David Hildenbrand wrote:
> > > > PG_owner_priv_1 maps to PG_swapcache? :)
> > >
> > > Maybe the combination !swapbacked && swapcache could be used to indicate
> > > such folios. (we will never set swapbacked)
> > >
> > > But likely we have to be a bit careful here. We don't want
> > > folio_test_swapcache() to return for folios that ... are not in the
> > > swapcache.
> >
> > I was thinking that too, but I'm afraid it's going to be another
> > whack-a-mole nightmare. Even for things like task_mmu in procfs that
> > show stats, that's going to be wonky.
> >
> > Any other flags we can overload that aren't going to be already used in
> > our case?
>
> PG_error / folio_set_error seems unused in the non-IO case.
And PG_large_rmappable seems to only be used for hugetlb branches.
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index b9e914e1face..7fdc03197438 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -190,6 +190,7 @@ enum pageflags {
/* At least one page in this folio has the hwpoison flag set */
PG_has_hwpoisoned = PG_error,
PG_large_rmappable = PG_workingset, /* anon or file-backed */
+ PG_droppable = PG_error, /* anon droppable, not hugetlb */
};
#define PAGEFLAGS_MASK ((1UL << NR_PAGEFLAGS) - 1)
@@ -640,6 +641,8 @@ FOLIO_TEST_CLEAR_FLAG_FALSE(young)
FOLIO_FLAG_FALSE(idle)
#endif
+FOLIO_FLAG(droppable, FOLIO_SECOND_PAGE)
+
/*
* PageReported() is used to track reported free pages within the Buddy
* allocator. We can use the non-atomic version of the test and set
diff --git a/mm/rmap.c b/mm/rmap.c
index 1f9b5a9cb121..73b4052b2f82 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1403,6 +1403,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
*/
if (!(vma->vm_flags & VM_DROPPABLE))
__folio_set_swapbacked(folio);
+ else
+ folio_set_droppable(folio);
__folio_set_anon(folio, vma, address, true);
if (likely(!folio_test_large(folio))) {
@@ -1852,7 +1854,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* ones can be dropped even if they've
* been dirtied.
*/
- (vma->vm_flags & VM_DROPPABLE))) {
+ folio_test_droppable(folio))) {
dec_mm_counter(mm, MM_ANONPAGES);
goto discard;
}
@@ -1866,7 +1868,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* Unlike MADV_FREE mappings, VM_DROPPABLE ones
* never get swap backed on failure to drop.
*/
- if (!(vma->vm_flags & VM_DROPPABLE))
+ if (!folio_test_droppable(folio))
folio_set_swapbacked(folio);
ret = false;
page_vma_mapped_walk_done(&pvmw);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2e34de9cd0d4..41340f2a12c7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4266,7 +4266,8 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
}
/* dirty lazyfree */
- if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
+ if (type == LRU_GEN_FILE && folio_test_anon(folio) &&
+ folio_test_dirty(folio) && !folio_test_droppable(folio)) {
success = lru_gen_del_folio(lruvec, folio, true);
VM_WARN_ON_ONCE_FOLIO(!success, folio);
folio_set_swapbacked(folio);
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 18:08 ` Jason A. Donenfeld
@ 2024-07-11 18:24 ` David Hildenbrand
2024-07-11 18:54 ` Jason A. Donenfeld
0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 18:24 UTC (permalink / raw)
To: Jason A. Donenfeld
Cc: Linus Torvalds, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On 11.07.24 20:08, Jason A. Donenfeld wrote:
> On Thu, Jul 11, 2024 at 07:56:39PM +0200, Jason A. Donenfeld wrote:
>> On Thu, Jul 11, 2024 at 07:54:34PM +0200, Jason A. Donenfeld wrote:
>>> On Thu, Jul 11, 2024 at 07:27:27PM +0200, David Hildenbrand wrote:
>>>>> PG_owner_priv_1 maps to PG_swapcache? :)
>>>>
>>>> Maybe the combination !swapbacked && swapcache could be used to indicate
>>>> such folios. (we will never set swapbacked)
>>>>
>>>> But likely we have to be a bit careful here. We don't want
>>>> folio_test_swapcache() to return true for folios that ... are not in the
>>>> swapcache.
>>>
>>> I was thinking that too, but I'm afraid it's going to be another
>>> whack-a-mole nightmare. Even for things like task_mmu in procfs that
>>> show stats, that's going to be wonky.
>>>
>>> Any other flags we can overload that aren't going to be already used in
>>> our case?
>>
>> PG_error / folio_set_error seems unused in the non-IO case.
>
Note that Willy is about to remove PG_error IIRC.
> And PG_large_rmappable seems to only be used for hugetlb branches.
It should be set for THP/large folios.
--
Cheers,
David / dhildenb
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 18:24 ` David Hildenbrand
@ 2024-07-11 18:54 ` Jason A. Donenfeld
2024-07-11 18:56 ` David Hildenbrand
0 siblings, 1 reply; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 18:54 UTC (permalink / raw)
To: David Hildenbrand
Cc: Linus Torvalds, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Thu, Jul 11, 2024 at 08:24:07PM +0200, David Hildenbrand wrote:
> > And PG_large_rmappable seems to only be used for hugetlb branches.
>
> It should be set for THP/large folios.
And it's tested too, apparently.
Okay, well, how disappointing is this below? Because I'm running out of
tricks for flag reuse.
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index b9e914e1face..c1ea49a7f198 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -110,6 +110,7 @@ enum pageflags {
PG_workingset,
PG_error,
PG_owner_priv_1, /* Owner use. If pagecache, fs may use*/
+ PG_owner_priv_2,
PG_arch_1,
PG_reserved,
PG_private, /* If pagecache, has fs-private data */
@@ -190,6 +191,9 @@ enum pageflags {
/* At least one page in this folio has the hwpoison flag set */
PG_has_hwpoisoned = PG_error,
PG_large_rmappable = PG_workingset, /* anon or file-backed */
+
+ /* Zero page under memory pressure. */
+ PG_droppable = PG_owner_priv_2,
};
#define PAGEFLAGS_MASK ((1UL << NR_PAGEFLAGS) - 1)
@@ -549,6 +553,8 @@ PAGEFLAG(Private, private, PF_ANY)
PAGEFLAG(Private2, private_2, PF_ANY) TESTSCFLAG(Private2, private_2, PF_ANY)
PAGEFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
TESTCLEARFLAG(OwnerPriv1, owner_priv_1, PF_ANY)
+PAGEFLAG(OwnerPriv2, owner_priv_2, PF_ANY)
+ TESTCLEARFLAG(OwnerPriv2, owner_priv_2, PF_ANY)
/*
* Only test-and-set exist for PG_writeback. The unconditional operators are
@@ -640,6 +646,8 @@ FOLIO_TEST_CLEAR_FLAG_FALSE(young)
FOLIO_FLAG_FALSE(idle)
#endif
+FOLIO_FLAG(droppable, FOLIO_SECOND_PAGE)
+
/*
* PageReported() is used to track reported free pages within the Buddy
* allocator. We can use the non-atomic version of the test and set
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index b63d211bd141..986551588805 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -108,6 +108,7 @@
DEF_PAGEFLAG_NAME(active), \
DEF_PAGEFLAG_NAME(workingset), \
DEF_PAGEFLAG_NAME(owner_priv_1), \
+ DEF_PAGEFLAG_NAME(owner_priv_2), \
DEF_PAGEFLAG_NAME(arch_1), \
DEF_PAGEFLAG_NAME(reserved), \
DEF_PAGEFLAG_NAME(private), \
diff --git a/mm/rmap.c b/mm/rmap.c
index 1f9b5a9cb121..73b4052b2f82 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1403,6 +1403,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
*/
if (!(vma->vm_flags & VM_DROPPABLE))
__folio_set_swapbacked(folio);
+ else
+ folio_set_droppable(folio);
__folio_set_anon(folio, vma, address, true);
if (likely(!folio_test_large(folio))) {
@@ -1852,7 +1854,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* ones can be dropped even if they've
* been dirtied.
*/
- (vma->vm_flags & VM_DROPPABLE))) {
+ folio_test_droppable(folio))) {
dec_mm_counter(mm, MM_ANONPAGES);
goto discard;
}
@@ -1866,7 +1868,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* Unlike MADV_FREE mappings, VM_DROPPABLE ones
* never get swap backed on failure to drop.
*/
- if (!(vma->vm_flags & VM_DROPPABLE))
+ if (!folio_test_droppable(folio))
folio_set_swapbacked(folio);
ret = false;
page_vma_mapped_walk_done(&pvmw);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2e34de9cd0d4..41340f2a12c7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4266,7 +4266,8 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
}
/* dirty lazyfree */
- if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
+ if (type == LRU_GEN_FILE && folio_test_anon(folio) &&
+ folio_test_dirty(folio) && !folio_test_droppable(folio)) {
success = lru_gen_del_folio(lruvec, folio, true);
VM_WARN_ON_ONCE_FOLIO(!success, folio);
folio_set_swapbacked(folio);
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 18:54 ` Jason A. Donenfeld
@ 2024-07-11 18:56 ` David Hildenbrand
2024-07-11 19:18 ` David Hildenbrand
0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 18:56 UTC (permalink / raw)
To: Jason A. Donenfeld
Cc: Linus Torvalds, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On 11.07.24 20:54, Jason A. Donenfeld wrote:
> On Thu, Jul 11, 2024 at 08:24:07PM +0200, David Hildenbrand wrote:
>>> And PG_large_rmappable seems to only be used for hugetlb branches.
>>
>> It should be set for THP/large folios.
>
> And it's tested too, apparently.
>
> Okay, well, how disappointing is this below? Because I'm running out of
> tricks for flag reuse.
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index b9e914e1face..c1ea49a7f198 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -110,6 +110,7 @@ enum pageflags {
> PG_workingset,
> PG_error,
> PG_owner_priv_1, /* Owner use. If pagecache, fs may use*/
> + PG_owner_priv_2,
Oh no, no new page flags please :)
Maybe just follow what Linus suggested: pass vma to pte_dirty() and
always return false for these special VMAs.
--
Cheers,
David / dhildenb
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 17:57 ` Linus Torvalds
@ 2024-07-11 19:07 ` David Hildenbrand
2024-07-11 19:17 ` Linus Torvalds
2024-07-11 20:07 ` Jason A. Donenfeld
1 sibling, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 19:07 UTC (permalink / raw)
To: Linus Torvalds, Jason A. Donenfeld
Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
Greg Kroah-Hartman, Adhemerval Zanella Netto, Carlos O'Donell,
Florian Weimer, Arnd Bergmann, Jann Horn, Christian Brauner,
David Hildenbrand, linux-mm
On 11.07.24 19:57, Linus Torvalds wrote:
> On Thu, 11 Jul 2024 at 10:09, Jason A. Donenfeld <Jason@zx2c4.com> wrote:
>>
>> When I was working on this patchset this year with the syscall, this is
>> similar somewhat to the initial approach I was taking with setting up a
>> special mapping. It turned into kind of a mess and I couldn't get it
>> working. There's a lot of functionality built around anonymous pages
>> that would need to be duplicated (I think?).
>
> Yeah, I was kind of assuming that. You'd need to handle VM_DROPPABLE
> in the fault path specially, the way we currently split up based on
> vma_is_anonymous(), eg
>
> if (vma_is_anonymous(vmf->vma))
> return do_anonymous_page(vmf);
> else
> return do_fault(vmf);
>
> in do_pte_missing() etc.
>
> I don't actually think it would be too hard, but it's a more
> "conceptual" change, and it's probably not worth it.
>
>> Alright, an hour later of fiddling, and it doesn't actually work (yet?)
>> -- the selftest fails. A diff follows below.
>
> May I suggest a slightly different approach: do what we did for "pte_mkwrite()".
>
> It needed the vma too, for not too dissimilar reasons: special dirty
> bit handling for the shadow stack. See
>
> bb3aadf7d446 ("x86/mm: Start actually marking _PAGE_SAVED_DIRTY")
> b497e52ddb2a ("x86/mm: Teach pte_mkwrite() about stack memory")
>
> and now we have "pte_mkwrite_novma()" with the old semantics for the
> legacy cases that didn't get converted - whether it's because the
> architecture doesn't have the issue, or because it's a kernel pte.
>
> And the conversion was actually quite pain-free, because we have
>
> #ifndef pte_mkwrite
> static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
> {
> return pte_mkwrite_novma(pte);
> }
> #endif
>
> so all any architecture that didn't want this needed to do was to
> rename their pte_mkwrite() to pte_mkwrite_novma() and they were done.
> In fact, that was done first as basically semantically no-op patches:
>
> 2f0584f3f4bd ("mm: Rename arch pte_mkwrite()'s to pte_mkwrite_novma()")
> 6ecc21bb432d ("mm: Move pte/pmd_mkwrite() callers with no VMA to _novma()")
> 161e393c0f63 ("mm: Make pte_mkwrite() take a VMA")
>
> which made this all very pain-free (and was largely a sed script, I think).
>
>> - !pte_dirty(pte) && !PageDirty(page))
>> + !pte_dirty(pte) && !PageDirty(page) &&
>> + !(vma->vm_flags & VM_DROPPABLE))
>
> So instead of this kind of thing, we'd have
>
>> - !pte_dirty(pte) && !PageDirty(page))
>> + !pte_dirty(pte, vma) && !PageDirty(page) &&
>
> and the advantage here is that you can't miss anybody by mistake. The
> compiler will be very unhappy if you don't pass in the vma, and then
> any places that would be converted to "pte_dirty_novma()" stand out explicitly.
>
> We don't actually have all that many users of pte_dirty(), so it
> doesn't look too nasty. And if we make the pte_dirty() semantics
> depend on the vma, I really think we should do it the same way we did
> pte_mkwrite().
We also have these folio_mark_dirty() calls, for example in
unpin_user_pages_dirty_lock(). Hm ... so preventing the folio from
getting dirtied is likely shaky.
I guess we need a way to just reliably identify these folios :/.
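The shape of that path is roughly the following (a simplified sketch, not the
exact kernel code): the pinner only has the folio in hand, no vma anywhere,
when it re-dirties it:

	/* sketch: after DMA, the pin user dirties and releases the folio */
	if (make_dirty && !folio_test_dirty(folio)) {
		folio_lock(folio);
		folio_mark_dirty(folio);
		folio_unlock(folio);
	}
	unpin_user_page(page);

so a vma-aware pte_dirty() alone cannot cover this kind of dirtying.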
--
Cheers,
David / dhildenb
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 19:07 ` David Hildenbrand
@ 2024-07-11 19:17 ` Linus Torvalds
2024-07-11 19:22 ` David Hildenbrand
0 siblings, 1 reply; 39+ messages in thread
From: Linus Torvalds @ 2024-07-11 19:17 UTC (permalink / raw)
To: David Hildenbrand
Cc: Jason A. Donenfeld, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Thu, 11 Jul 2024 at 12:08, David Hildenbrand <david@redhat.com> wrote:
>
> We also have these folio_mark_dirty() calls, for example in
> unpin_user_pages_dirty_lock(). Hm ... so preventing the folio from
> getting dirtied is likely shaky.
I do wonder if we should just disallow page pinning for these pages
entirely. When the page can get replaced by zeroes at any time,
pinning it doesn't make much sense.
Except we do have that whole "fast" case that intentionally doesn't
take locks and doesn't have a vma. Darn.
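Something like the following could work for the slow path (purely
illustrative; where exactly the check would live, e.g. the vma-flag
validation in mm/gup.c, is an assumption):

	/* refuse pinning pages of droppable mappings when a vma is at hand */
	if ((gup_flags & FOLL_PIN) && (vma->vm_flags & VM_DROPPABLE))
		return -EFAULT;

but gup-fast, as noted, never sees the vma, which is exactly the hole.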
Linus
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 18:56 ` David Hildenbrand
@ 2024-07-11 19:18 ` David Hildenbrand
2024-07-11 19:20 ` David Hildenbrand
0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 19:18 UTC (permalink / raw)
To: Jason A. Donenfeld
Cc: Linus Torvalds, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm, Yu Zhao
On 11.07.24 20:56, David Hildenbrand wrote:
> On 11.07.24 20:54, Jason A. Donenfeld wrote:
>> On Thu, Jul 11, 2024 at 08:24:07PM +0200, David Hildenbrand wrote:
>>>> And PG_large_rmappable seems to only be used for hugetlb branches.
>>>
>>> It should be set for THP/large folios.
>>
>> And it's tested too, apparently.
>>
>> Okay, well, how disappointing is this below? Because I'm running out of
>> tricks for flag reuse.
>>
>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>> index b9e914e1face..c1ea49a7f198 100644
>> --- a/include/linux/page-flags.h
>> +++ b/include/linux/page-flags.h
>> @@ -110,6 +110,7 @@ enum pageflags {
>> PG_workingset,
>> PG_error,
>> PG_owner_priv_1, /* Owner use. If pagecache, fs may use*/
>> + PG_owner_priv_2,
>
> Oh no, no new page flags please :)
>
> Maybe just follow what Linus suggested: pass vma to pte_dirty() and
> always return false for these special VMAs.
... or look into removing that one case that gives us headache.
No idea what would happen if we do the following:
CCing Yu Zhao.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0761f91b407f..d1dfbd4fd38d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4280,14 +4280,9 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
return true;
}
- /* dirty lazyfree */
- if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
- success = lru_gen_del_folio(lruvec, folio, true);
- VM_WARN_ON_ONCE_FOLIO(!success, folio);
- folio_set_swapbacked(folio);
- lruvec_add_folio_tail(lruvec, folio);
- return true;
- }
+ /* lazyfree: we may not be allowed to set swapbacked: MAP_DROPPABLE */
+ if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio))
+ return false;
--
Cheers,
David / dhildenb
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 19:18 ` David Hildenbrand
@ 2024-07-11 19:20 ` David Hildenbrand
2024-07-11 19:49 ` Yu Zhao
0 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 19:20 UTC (permalink / raw)
To: Jason A. Donenfeld
Cc: Linus Torvalds, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm, Yu Zhao
On 11.07.24 21:18, David Hildenbrand wrote:
> On 11.07.24 20:56, David Hildenbrand wrote:
>> On 11.07.24 20:54, Jason A. Donenfeld wrote:
>>> On Thu, Jul 11, 2024 at 08:24:07PM +0200, David Hildenbrand wrote:
>>>>> And PG_large_rmappable seems to only be used for hugetlb branches.
>>>>
>>>> It should be set for THP/large folios.
>>>
>>> And it's tested too, apparently.
>>>
>>> Okay, well, how disappointing is this below? Because I'm running out of
>>> tricks for flag reuse.
>>>
>>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>>> index b9e914e1face..c1ea49a7f198 100644
>>> --- a/include/linux/page-flags.h
>>> +++ b/include/linux/page-flags.h
>>> @@ -110,6 +110,7 @@ enum pageflags {
>>> PG_workingset,
>>> PG_error,
>>> PG_owner_priv_1, /* Owner use. If pagecache, fs may use*/
>>> + PG_owner_priv_2,
>>
>> Oh no, no new page flags please :)
>>
>> Maybe just follow what Linus suggested: pass vma to pte_dirty() and
>> always return false for these special VMAs.
>
> ... or look into removing that one case that gives us headache.
>
> No idea what would happen if we do the following:
>
> CCing Yu Zhao.
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 0761f91b407f..d1dfbd4fd38d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4280,14 +4280,9 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> return true;
> }
>
> - /* dirty lazyfree */
> - if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
> - success = lru_gen_del_folio(lruvec, folio, true);
> - VM_WARN_ON_ONCE_FOLIO(!success, folio);
> - folio_set_swapbacked(folio);
> - lruvec_add_folio_tail(lruvec, folio);
> - return true;
> - }
> + /* lazyfree: we may not be allowed to set swapbacked: MAP_DROPPABLE */
> + if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio))
> + return false;
Note that something is unclear to me: are we maybe running into that
code also if folio_set_swapbacked() is already set and we are not in the
lazyfree path (in contrast to what is documented)?
--
Cheers,
David / dhildenb
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 19:17 ` Linus Torvalds
@ 2024-07-11 19:22 ` David Hildenbrand
0 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 19:22 UTC (permalink / raw)
To: Linus Torvalds
Cc: Jason A. Donenfeld, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On 11.07.24 21:17, Linus Torvalds wrote:
> On Thu, 11 Jul 2024 at 12:08, David Hildenbrand <david@redhat.com> wrote:
>>
>> We also have these folio_mark_dirty() calls, for example in
>> unpin_user_pages_dirty_lock(). Hm ... so preventing the folio from
>> getting dirtied is likely shaky.
>
> I do wonder if we should just disallow page pinning for these pages
> entirely. When the page can get replaced by zeroes at any time,
> pinning it doesn't make much sense.
>
> Except we do have that whole "fast" case that intentionally doesn't
> take locks and doesn't have a vma. Darn.
Yeah, and I think it should all be simpler; we shouldn't have to
special-case these cases everywhere.
Maybe we can just find a way to not do *folio_set_swapbacked() without a
VMA.
--
Cheers,
David / dhildenb
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 19:20 ` David Hildenbrand
@ 2024-07-11 19:49 ` Yu Zhao
2024-07-11 19:52 ` Yu Zhao
` (2 more replies)
0 siblings, 3 replies; 39+ messages in thread
From: Yu Zhao @ 2024-07-11 19:49 UTC (permalink / raw)
To: David Hildenbrand
Cc: Jason A. Donenfeld, Linus Torvalds, linux-kernel, patches, tglx,
linux-crypto, linux-api, x86, Greg Kroah-Hartman,
Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer,
Arnd Bergmann, Jann Horn, Christian Brauner, David Hildenbrand,
linux-mm
On Thu, Jul 11, 2024 at 1:20 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 11.07.24 21:18, David Hildenbrand wrote:
> > On 11.07.24 20:56, David Hildenbrand wrote:
> >> On 11.07.24 20:54, Jason A. Donenfeld wrote:
> >>> On Thu, Jul 11, 2024 at 08:24:07PM +0200, David Hildenbrand wrote:
> >>>>> And PG_large_rmappable seems to only be used for hugetlb branches.
> >>>>
> >>>> It should be set for THP/large folios.
> >>>
> >>> And it's tested too, apparently.
> >>>
> >>> Okay, well, how disappointing is this below? Because I'm running out of
> >>> tricks for flag reuse.
> >>>
> >>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> >>> index b9e914e1face..c1ea49a7f198 100644
> >>> --- a/include/linux/page-flags.h
> >>> +++ b/include/linux/page-flags.h
> >>> @@ -110,6 +110,7 @@ enum pageflags {
> >>> PG_workingset,
> >>> PG_error,
> >>> PG_owner_priv_1, /* Owner use. If pagecache, fs may use*/
> >>> + PG_owner_priv_2,
> >>
> >> Oh no, no new page flags please :)
> >>
> >> Maybe just follow what Linus suggested: pass vma to pte_dirty() and
> >> always return false for these special VMAs.
> >
> > ... or look into removing that one case that gives us headache.
> >
> > No idea what would happen if we do the following:
> >
> > CCing Yu Zhao.
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 0761f91b407f..d1dfbd4fd38d 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -4280,14 +4280,9 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> > return true;
> > }
> >
> > - /* dirty lazyfree */
> > - if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
> > - success = lru_gen_del_folio(lruvec, folio, true);
> > - VM_WARN_ON_ONCE_FOLIO(!success, folio);
> > - folio_set_swapbacked(folio);
> > - lruvec_add_folio_tail(lruvec, folio);
> > - return true;
> > - }
> > + /* lazyfree: we may not be allowed to set swapbacked: MAP_DROPPABLE */
> > + if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio))
> > + return false;
This is an optimization to avoid an unnecessary trip to
shrink_folio_list(), so it's safe to delete the entire 'if' block, and
that would be preferable to leaving a dangling 'if'.
> Note that something is unclear to me: are we maybe running into that
> code also if folio_set_swapbacked() is already set and we are not in the
> lazyfree path (in contrast to what is documented)?
Not sure what you mean: either rmap sees pte_dirty() and does
folio_mark_dirty() and then folio_set_swapbacked(); or MGLRU does the
same sequence, with the first two steps in walk_pte_range() and the
last one here.
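Put as a sketch (simplified, not literal code from either path), both flows
amount to:

	if (pte_dirty(pte)) {
		folio_mark_dirty(folio);	/* rmap, or MGLRU's walk_pte_range() */
		folio_set_swapbacked(folio);	/* rmap, or sort_folio() here */
	}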
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 19:49 ` Yu Zhao
@ 2024-07-11 19:52 ` Yu Zhao
2024-07-11 19:53 ` David Hildenbrand
2024-07-11 20:20 ` Jason A. Donenfeld
2 siblings, 0 replies; 39+ messages in thread
From: Yu Zhao @ 2024-07-11 19:52 UTC (permalink / raw)
To: David Hildenbrand
Cc: Jason A. Donenfeld, Linus Torvalds, linux-kernel, patches, tglx,
linux-crypto, linux-api, x86, Greg Kroah-Hartman,
Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer,
Arnd Bergmann, Jann Horn, Christian Brauner, David Hildenbrand,
linux-mm
On Thu, Jul 11, 2024 at 1:49 PM Yu Zhao <yuzhao@google.com> wrote:
>
> On Thu, Jul 11, 2024 at 1:20 PM David Hildenbrand <david@redhat.com> wrote:
> >
> > On 11.07.24 21:18, David Hildenbrand wrote:
> > > On 11.07.24 20:56, David Hildenbrand wrote:
> > >> On 11.07.24 20:54, Jason A. Donenfeld wrote:
> > >>> On Thu, Jul 11, 2024 at 08:24:07PM +0200, David Hildenbrand wrote:
> > >>>>> And PG_large_rmappable seems to only be used for hugetlb branches.
> > >>>>
> > >>>> It should be set for THP/large folios.
> > >>>
> > >>> And it's tested too, apparently.
> > >>>
> > >>> Okay, well, how disappointing is this below? Because I'm running out of
> > >>> tricks for flag reuse.
> > >>>
> > >>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > >>> index b9e914e1face..c1ea49a7f198 100644
> > >>> --- a/include/linux/page-flags.h
> > >>> +++ b/include/linux/page-flags.h
> > >>> @@ -110,6 +110,7 @@ enum pageflags {
> > >>> PG_workingset,
> > >>> PG_error,
> > >>> PG_owner_priv_1, /* Owner use. If pagecache, fs may use*/
> > >>> + PG_owner_priv_2,
> > >>
> > >> Oh no, no new page flags please :)
> > >>
> > >> Maybe just follow what Linus suggested: pass vma to pte_dirty() and
> > >> always return false for these special VMAs.
> > >
> > > ... or look into removing that one case that gives us headache.
> > >
> > > No idea what would happen if we do the following:
> > >
> > > CCing Yu Zhao.
> > >
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 0761f91b407f..d1dfbd4fd38d 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -4280,14 +4280,9 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> > > return true;
> > > }
> > >
> > > - /* dirty lazyfree */
> > > - if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
> > > - success = lru_gen_del_folio(lruvec, folio, true);
> > > - VM_WARN_ON_ONCE_FOLIO(!success, folio);
> > > - folio_set_swapbacked(folio);
> > > - lruvec_add_folio_tail(lruvec, folio);
> > > - return true;
> > > - }
> > > + /* lazyfree: we may not be allowed to set swapbacked: MAP_DROPPABLE */
> > > + if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio))
> > > + return false;
>
> This is an optimization to avoid an unnecessary trip to
> shrink_folio_list(), so it's safe to delete the entire 'if' block, and
> that would be preferable to leaving a dangling 'if'.
>
> > Note that something is unclear to me: are we maybe running into that
> > code also if folio_set_swapbacked() is already set and we are not in the
> > lazyfree path (in contrast to what is documented)?
>
> Not sure what you mean: either rmap sees pte_dirty() and does
> folio_mark_dirty() and then folio_set_swapbacked(); or MGLRU does the
> same sequence, with the first two steps in walk_pte_range() and the
> last one here.
Rationale: rmap is expensive (cache unfriendly) and MGLRU tries to
avoid using it.
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 19:49 ` Yu Zhao
2024-07-11 19:52 ` Yu Zhao
@ 2024-07-11 19:53 ` David Hildenbrand
2024-07-11 19:58 ` Yu Zhao
2024-07-11 20:20 ` Jason A. Donenfeld
2 siblings, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 19:53 UTC (permalink / raw)
To: Yu Zhao
Cc: Jason A. Donenfeld, Linus Torvalds, linux-kernel, patches, tglx,
linux-crypto, linux-api, x86, Greg Kroah-Hartman,
Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer,
Arnd Bergmann, Jann Horn, Christian Brauner, David Hildenbrand,
linux-mm
On 11.07.24 21:49, Yu Zhao wrote:
> On Thu, Jul 11, 2024 at 1:20 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 11.07.24 21:18, David Hildenbrand wrote:
>>> On 11.07.24 20:56, David Hildenbrand wrote:
>>>> On 11.07.24 20:54, Jason A. Donenfeld wrote:
>>>>> On Thu, Jul 11, 2024 at 08:24:07PM +0200, David Hildenbrand wrote:
>>>>>>> And PG_large_rmappable seems to only be used for hugetlb branches.
>>>>>>
>>>>>> It should be set for THP/large folios.
>>>>>
>>>>> And it's tested too, apparently.
>>>>>
>>>>> Okay, well, how disappointing is this below? Because I'm running out of
>>>>> tricks for flag reuse.
>>>>>
>>>>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>>>>> index b9e914e1face..c1ea49a7f198 100644
>>>>> --- a/include/linux/page-flags.h
>>>>> +++ b/include/linux/page-flags.h
>>>>> @@ -110,6 +110,7 @@ enum pageflags {
>>>>> PG_workingset,
>>>>> PG_error,
>>>>> PG_owner_priv_1, /* Owner use. If pagecache, fs may use*/
>>>>> + PG_owner_priv_2,
>>>>
>>>> Oh no, no new page flags please :)
>>>>
>>>> Maybe just follow what Linus suggested: pass vma to pte_dirty() and
>>>> always return false for these special VMAs.
>>>
>>> ... or look into removing that one case that gives us headache.
>>>
>>> No idea what would happen if we do the following:
>>>
>>> CCing Yu Zhao.
>>>
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index 0761f91b407f..d1dfbd4fd38d 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -4280,14 +4280,9 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
>>> return true;
>>> }
>>>
>>> - /* dirty lazyfree */
>>> - if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
>>> - success = lru_gen_del_folio(lruvec, folio, true);
>>> - VM_WARN_ON_ONCE_FOLIO(!success, folio);
>>> - folio_set_swapbacked(folio);
>>> - lruvec_add_folio_tail(lruvec, folio);
>>> - return true;
>>> - }
>>> + /* lazyfree: we may not be allowed to set swapbacked: MAP_DROPPABLE */
>>> + if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio))
>>> + return false;
>
> This is an optimization to avoid an unnecessary trip to
> shrink_folio_list(), so it's safe to delete the entire 'if' block, and
> that would be preferable to leaving a dangling 'if'.
Great, thanks.
>
>> Note that something is unclear to me: are we maybe running into that
>> code also if folio_set_swapbacked() is already set and we are not in the
>> lazyfree path (in contrast to what is documented)?
>
> Not sure what you mean: either rmap sees pte_dirty() and does
> folio_mark_dirty() and then folio_set_swapbacked(); or MGLRU does the
> same sequence, with the first two steps in walk_pte_range() and the
> last one here.
Let me rephrase:
Checking for lazyfree is
"folio_test_anon(folio) && !folio_test_swapbacked(folio)"
Testing for dirtied lazyfree is
"folio_test_anon(folio) && !folio_test_swapbacked(folio) &&
folio_test_dirty(folio)"
So I'm wondering about the missing folio_test_swapbacked() test.
--
Cheers,
David / dhildenb
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 19:53 ` David Hildenbrand
@ 2024-07-11 19:58 ` Yu Zhao
2024-07-11 20:59 ` David Hildenbrand
0 siblings, 1 reply; 39+ messages in thread
From: Yu Zhao @ 2024-07-11 19:58 UTC (permalink / raw)
To: David Hildenbrand
Cc: Jason A. Donenfeld, Linus Torvalds, linux-kernel, patches, tglx,
linux-crypto, linux-api, x86, Greg Kroah-Hartman,
Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer,
Arnd Bergmann, Jann Horn, Christian Brauner, David Hildenbrand,
linux-mm
On Thu, Jul 11, 2024 at 1:53 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 11.07.24 21:49, Yu Zhao wrote:
> > On Thu, Jul 11, 2024 at 1:20 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 11.07.24 21:18, David Hildenbrand wrote:
> >>> On 11.07.24 20:56, David Hildenbrand wrote:
> >>>> On 11.07.24 20:54, Jason A. Donenfeld wrote:
> >>>>> On Thu, Jul 11, 2024 at 08:24:07PM +0200, David Hildenbrand wrote:
> >>>>>>> And PG_large_rmappable seems to only be used for hugetlb branches.
> >>>>>>
> >>>>>> It should be set for THP/large folios.
> >>>>>
> >>>>> And it's tested too, apparently.
> >>>>>
> >>>>> Okay, well, how disappointing is this below? Because I'm running out of
> >>>>> tricks for flag reuse.
> >>>>>
> >>>>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> >>>>> index b9e914e1face..c1ea49a7f198 100644
> >>>>> --- a/include/linux/page-flags.h
> >>>>> +++ b/include/linux/page-flags.h
> >>>>> @@ -110,6 +110,7 @@ enum pageflags {
> >>>>> PG_workingset,
> >>>>> PG_error,
> >>>>> PG_owner_priv_1, /* Owner use. If pagecache, fs may use*/
> >>>>> + PG_owner_priv_2,
> >>>>
> >>>> Oh no, no new page flags please :)
> >>>>
> >>>> Maybe just follow what Linus suggested: pass vma to pte_dirty() and
> >>>> always return false for these special VMAs.
> >>>
> >>> ... or look into removing that one case that gives us headache.
> >>>
> >>> No idea what would happen if we do the following:
> >>>
> >>> CCing Yu Zhao.
> >>>
> >>> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >>> index 0761f91b407f..d1dfbd4fd38d 100644
> >>> --- a/mm/vmscan.c
> >>> +++ b/mm/vmscan.c
> >>> @@ -4280,14 +4280,9 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
> >>> return true;
> >>> }
> >>>
> >>> - /* dirty lazyfree */
> >>> - if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
> >>> - success = lru_gen_del_folio(lruvec, folio, true);
> >>> - VM_WARN_ON_ONCE_FOLIO(!success, folio);
> >>> - folio_set_swapbacked(folio);
> >>> - lruvec_add_folio_tail(lruvec, folio);
> >>> - return true;
> >>> - }
> >>> + /* lazyfree: we may not be allowed to set swapbacked: MAP_DROPPABLE */
> >>> + if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio))
> >>> + return false;
> >
> > This is an optimization to avoid an unnecessary trip to
> > shrink_folio_list(), so it's safe to delete the entire 'if' block, and
> > that would be preferable to leaving a dangling 'if'.
>
> Great, thanks.
>
> >
> >> Note that something is unclear to me: are we maybe running into that
> >> code also if folio_set_swapbacked() is already set and we are not in the
> >> lazyfree path (in contrast to what is documented)?
> >
> > Not sure what you mean: either rmap sees pte_dirty() and does
> > folio_mark_dirty() and then folio_set_swapbacked(); or MGLRU does the
> > same sequence, with the first two steps in walk_pte_range() and the
> > last one here.
>
> Let me rephrase:
>
> Checking for lazyfree is
>
> "folio_test_anon(folio) && !folio_test_swapbacked(folio)"
>
> Testing for dirtied lazyfree is
>
> "folio_test_anon(folio) && !folio_test_swapbacked(folio) &&
> folio_test_dirty(folio)"
>
> So I'm wondering about the missing folio_test_swapbacked() test.
It's not missing: type == LRU_GEN_FILE means folio_is_file_lru(),
which in turn means !folio_test_swapbacked().
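(For reference, folio_is_file_lru() is essentially just the negation of the
swapbacked flag, roughly:

	static inline int folio_is_file_lru(struct folio *folio)
	{
		return !folio_test_swapbacked(folio);
	}

so the LRU_GEN_FILE check already encodes that test.)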
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 17:57 ` Linus Torvalds
2024-07-11 19:07 ` David Hildenbrand
@ 2024-07-11 20:07 ` Jason A. Donenfeld
2024-07-11 20:17 ` Jason A. Donenfeld
1 sibling, 1 reply; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 20:07 UTC (permalink / raw)
To: Linus Torvalds
Cc: David Hildenbrand, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
Hi Linus,
On Thu, Jul 11, 2024 at 10:57:17AM -0700, Linus Torvalds wrote:
> May I suggest a slightly different approach: do what we did for "pte_mkwrite()".
>
> It needed the vma too, for not too dissimilar reasons: special dirty
> bit handling for the shadow stack. See
Thanks for the suggestion. That seems pretty clean.
It still needs to avoid setting swapbacked in the first place, but
ensuring that it's never dirty means it won't get turned back on.
The first patch renames pte_dirty() to pte_dirty_novma(). The second
patch adds an inline function, pte_dirty(pte, vma) that just forwards
the pte to pte_dirty_novma(), and then converts callers that have a vma
available so that they call pte_dirty(). And then the VM_DROPPABLE patch
simply adds the `&& !(vma->vm_flags & VM_DROPPABLE)` condition to
pte_dirty().
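So the end result would presumably be something along these lines (a sketch
only; the actual patches are in the branch below):

	static inline bool pte_dirty(pte_t pte, struct vm_area_struct *vma)
	{
		if (vma->vm_flags & VM_DROPPABLE)
			return false;
		return pte_dirty_novma(pte);
	}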
I put these in https://git.zx2c4.com/linux-rng/log/ per usual, and I'll
post a new version to the list before long (unless there are objections).
Jason
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 20:07 ` Jason A. Donenfeld
@ 2024-07-11 20:17 ` Jason A. Donenfeld
0 siblings, 0 replies; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 20:17 UTC (permalink / raw)
To: Linus Torvalds
Cc: David Hildenbrand, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Thu, Jul 11, 2024 at 10:07:30PM +0200, Jason A. Donenfeld wrote:
> Hi Linus,
>
> On Thu, Jul 11, 2024 at 10:57:17AM -0700, Linus Torvalds wrote:
> > May I suggest a slightly different approach: do what we did for "pte_mkwrite()".
> >
> > It needed the vma too, for not too dissimilar reasons: special dirty
> > bit handling for the shadow stack. See
>
> Thanks for the suggestion. That seems pretty clean.
>
> It still needs to avoid setting swapbacked in the first place, but
> ensuring that it's never dirty means it won't get turned back on.
>
> The first patch renames pte_dirty() to pte_dirty_novma(). The second
> patch adds an inline function, pte_dirty(pte, vma) that just forwards
> the pte to pte_dirty_novma(), and then converts callers that have a vma
> available so that they call pte_dirty(). And then the VM_DROPPABLE patch
> simply adds the `&& !(vma->vm_flags & VM_DROPPABLE)` condition to
> pte_dirty().
>
> I put these in https://git.zx2c4.com/linux-rng/log/ per usual, and I'll
> post a new version to the list before long (unless there are objections).
Oh, I didn't catch up on this thread in time (my mail flow is based on `lei
up`, which I guess I should run more frequently). It seems we might be going
in a different direction.
I'll move that to https://git.zx2c4.com/linux-rng/log/?h=jd/pte_dirty in
case it's useful later, though.
Jason
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 19:49 ` Yu Zhao
2024-07-11 19:52 ` Yu Zhao
2024-07-11 19:53 ` David Hildenbrand
@ 2024-07-11 20:20 ` Jason A. Donenfeld
2024-07-11 20:59 ` David Hildenbrand
2 siblings, 1 reply; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-11 20:20 UTC (permalink / raw)
To: Yu Zhao
Cc: David Hildenbrand, Linus Torvalds, linux-kernel, patches, tglx,
linux-crypto, linux-api, x86, Greg Kroah-Hartman,
Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer,
Arnd Bergmann, Jann Horn, Christian Brauner, David Hildenbrand,
linux-mm
Hi David,
On Thu, Jul 11, 2024 at 01:49:42PM -0600, Yu Zhao wrote:
> On Thu, Jul 11, 2024 at 1:20 PM David Hildenbrand <david@redhat.com> wrote:
> > > - /* dirty lazyfree */
> > > - if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
> > > - success = lru_gen_del_folio(lruvec, folio, true);
> > > - VM_WARN_ON_ONCE_FOLIO(!success, folio);
> > > - folio_set_swapbacked(folio);
> > > - lruvec_add_folio_tail(lruvec, folio);
> > > - return true;
> > > - }
> This is an optimization to avoid an unnecessary trip to
> shrink_folio_list(), so it's safe to delete the entire 'if' block, and
> that would be preferable to leaving a dangling 'if'.
Alright, I'll just remove that entire chunk then, for v+1 of this patch?
That sounds prettttty okay.
Jason
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 19:58 ` Yu Zhao
@ 2024-07-11 20:59 ` David Hildenbrand
0 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 20:59 UTC (permalink / raw)
To: Yu Zhao
Cc: Jason A. Donenfeld, Linus Torvalds, linux-kernel, patches, tglx,
linux-crypto, linux-api, x86, Greg Kroah-Hartman,
Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer,
Arnd Bergmann, Jann Horn, Christian Brauner, David Hildenbrand,
linux-mm
On 11.07.24 21:58, Yu Zhao wrote:
> On Thu, Jul 11, 2024 at 1:53 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 11.07.24 21:49, Yu Zhao wrote:
>>> On Thu, Jul 11, 2024 at 1:20 PM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>> On 11.07.24 21:18, David Hildenbrand wrote:
>>>>> On 11.07.24 20:56, David Hildenbrand wrote:
>>>>>> On 11.07.24 20:54, Jason A. Donenfeld wrote:
>>>>>>> On Thu, Jul 11, 2024 at 08:24:07PM +0200, David Hildenbrand wrote:
>>>>>>>>> And PG_large_rmappable seems to only be used for hugetlb branches.
>>>>>>>>
>>>>>>>> It should be set for THP/large folios.
>>>>>>>
>>>>>>> And it's tested too, apparently.
>>>>>>>
>>>>>>> Okay, well, how disappointing is this below? Because I'm running out of
>>>>>>> tricks for flag reuse.
>>>>>>>
>>>>>>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>>>>>>> index b9e914e1face..c1ea49a7f198 100644
>>>>>>> --- a/include/linux/page-flags.h
>>>>>>> +++ b/include/linux/page-flags.h
>>>>>>> @@ -110,6 +110,7 @@ enum pageflags {
>>>>>>> PG_workingset,
>>>>>>> PG_error,
>>>>>>> PG_owner_priv_1, /* Owner use. If pagecache, fs may use*/
>>>>>>> + PG_owner_priv_2,
>>>>>>
>>>>>> Oh no, no new page flags please :)
>>>>>>
>>>>>> Maybe just follow what Linus suggested: pass vma to pte_dirty() and
>>>>>> always return false for these special VMAs.
>>>>>
>>>>> ... or look into removing that one case that gives us headache.
>>>>>
>>>>> No idea what would happen if we do the following:
>>>>>
>>>>> CCing Yu Zhao.
>>>>>
>>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>>> index 0761f91b407f..d1dfbd4fd38d 100644
>>>>> --- a/mm/vmscan.c
>>>>> +++ b/mm/vmscan.c
>>>>> @@ -4280,14 +4280,9 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
>>>>> return true;
>>>>> }
>>>>>
>>>>> - /* dirty lazyfree */
>>>>> - if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
>>>>> - success = lru_gen_del_folio(lruvec, folio, true);
>>>>> - VM_WARN_ON_ONCE_FOLIO(!success, folio);
>>>>> - folio_set_swapbacked(folio);
>>>>> - lruvec_add_folio_tail(lruvec, folio);
>>>>> - return true;
>>>>> - }
>>>>> + /* lazyfree: we may not be allowed to set swapbacked: MAP_DROPPABLE */
>>>>> + if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio))
>>>>> + return false;
>>>
>>> This is an optimization to avoid an unnecessary trip to
>>> shrink_folio_list(), so it's safe to delete the entire 'if' block, and
>>> that would be preferable to leaving a dangling 'if'.
>>
>> Great, thanks.
>>
>>>
>>>> Note that something is unclear to me: are we maybe running into that
>>>> code also if folio_set_swapbacked() is already set and we are not in the
>>>> lazyfree path (in contrast to what is documented)?
>>>
>>> Not sure what you mean: either rmap sees pte_dirty() and does
>>> folio_mark_dirty() and then folio_set_swapbacked(); or MGLRU does the
>>> same sequence, with the first two steps in walk_pte_range() and the
>>> last one here.
>>
>> Let me rephrase:
>>
>> Checking for lazyfree is
>>
>> "folio_test_anon(folio) && !folio_test_swapbacked(folio)"
>>
>> Testing for dirtied lazyfree is
>>
>> "folio_test_anon(folio) && !folio_test_swapbacked(folio) &&
>> folio_test_dirty(folio)"
>>
>> So I'm wondering about the missing folio_test_swapbacked() test.
>
> It's not missing: type == LRU_GEN_FILE means folio_is_file_lru(),
> which in turn means !folio_test_swapbacked().
>
Ahh, got it, thanks!
--
Cheers,
David / dhildenb
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 20:20 ` Jason A. Donenfeld
@ 2024-07-11 20:59 ` David Hildenbrand
0 siblings, 0 replies; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 20:59 UTC (permalink / raw)
To: Jason A. Donenfeld, Yu Zhao
Cc: Linus Torvalds, linux-kernel, patches, tglx, linux-crypto,
linux-api, x86, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On 11.07.24 22:20, Jason A. Donenfeld wrote:
> Hi David,
>
> On Thu, Jul 11, 2024 at 01:49:42PM -0600, Yu Zhao wrote:
>> On Thu, Jul 11, 2024 at 1:20 PM David Hildenbrand <david@redhat.com> wrote:
>>>> - /* dirty lazyfree */
>>>> - if (type == LRU_GEN_FILE && folio_test_anon(folio) && folio_test_dirty(folio)) {
>>>> - success = lru_gen_del_folio(lruvec, folio, true);
>>>> - VM_WARN_ON_ONCE_FOLIO(!success, folio);
>>>> - folio_set_swapbacked(folio);
>>>> - lruvec_add_folio_tail(lruvec, folio);
>>>> - return true;
>>>> - }
>
>> This is an optimization to avoid an unnecessary trip to
>> shrink_folio_list(), so it's safe to delete the entire 'if' block, and
>> that would be preferable to leaving a dangling 'if'.
>
> Alright, I'll just remove that entire chunk then, for v+1 of this patch?
> That sounds prettttty okay.
Yes!
--
Cheers,
David / dhildenb
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-10 3:27 ` David Hildenbrand
2024-07-10 4:05 ` David Hildenbrand
@ 2024-07-11 22:29 ` David Hildenbrand
2024-07-12 1:21 ` Jason A. Donenfeld
1 sibling, 1 reply; 39+ messages in thread
From: David Hildenbrand @ 2024-07-11 22:29 UTC (permalink / raw)
To: Jason A. Donenfeld, linux-kernel, patches, tglx
Cc: linux-crypto, linux-api, x86, Linus Torvalds, Greg Kroah-Hartman,
Adhemerval Zanella Netto, Carlos O'Donell, Florian Weimer,
Arnd Bergmann, Jann Horn, Christian Brauner, David Hildenbrand,
linux-mm
On 10.07.24 05:27, David Hildenbrand wrote:
> On 09.07.24 15:05, Jason A. Donenfeld wrote:
>> The vDSO getrandom() implementation works with a buffer allocated with a
>> new system call that has certain requirements:
>>
>> - It shouldn't be written to core dumps.
>> * Easy: VM_DONTDUMP.
>> - It should be zeroed on fork.
>> * Easy: VM_WIPEONFORK.
>>
>> - It shouldn't be written to swap.
>> * Uh-oh: mlock is rlimited.
>> * Uh-oh: mlock isn't inherited by forks.
>>
>> It turns out that the vDSO getrandom() function has three really nice
>> characteristics that we can exploit to solve this problem:
>>
>> 1) Due to being wiped during fork(), the vDSO code is already robust to
>> having the contents of the pages it reads zeroed out midway through
>> the function's execution.
>>
>> 2) In the absolute worst case of whatever contingency we're coding for,
>> we have the option to fallback to the getrandom() syscall, and
>> everything is fine.
>>
>> 3) The buffers the function uses are only ever useful for a maximum of
>> 60 seconds -- a sort of cache, rather than a long term allocation.
>>
>> These characteristics mean that we can introduce VM_DROPPABLE, which
>> has the following semantics:
>>
>> a) It never is written out to swap.
>> b) Under memory pressure, mm can just drop the pages (so that they're
>> zero when read back again).
>> c) It is inherited by fork.
>> d) It doesn't count against the mlock budget, since nothing is locked.
>>
>> This is fairly simple to implement, with the one snag that we have to
>> use 64-bit VM_* flags, but this shouldn't be a problem, since the only
>> consumers will probably be 64-bit anyway.
>>
>> This way, allocations used by vDSO getrandom() can use:
>>
>> VM_DROPPABLE | VM_DONTDUMP | VM_WIPEONFORK | VM_NORESERVE
>>
>> And there will be no problem with using memory when not in use, not
>> wiping on fork(), coredumps, or writing out to swap.
>>
>> In order to let vDSO getrandom() use this, expose these via mmap(2) as
>> MAP_DROPPABLE.
>>
>> Finally, the provided self test ensures that this is working as desired.
>
> Acked-by: David Hildenbrand <david@redhat.com>
>
>
> I'll try to think of some corner cases we might be missing.
Sorry that I keep coming up with corner cases :) But these should be easy to handle:
1) We should disallow KSM.
diff --git a/mm/ksm.c b/mm/ksm.c
index df6bae3a5a2c..d6744183ba41 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -713,7 +713,7 @@ static bool vma_ksm_compatible(struct vm_area_struct *vma)
{
if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE | VM_PFNMAP |
VM_IO | VM_DONTEXPAND | VM_HUGETLB |
- VM_MIXEDMAP))
+ VM_MIXEDMAP | VM_DROPPABLE))
return false; /* just ignore the advice */
if (vma_is_dax(vma))
We don't want to suddenly get pages that are swapbacked.
2) We should disable userfaultfd
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 05d59f74fc88..a12bcf042551 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -218,6 +218,9 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
{
vm_flags &= __VM_UFFD_FLAGS;
+ if (vm_flags & VM_DROPPABLE)
+ return false;
+
if ((vm_flags & VM_UFFD_MINOR) &&
(!is_vm_hugetlb_page(vma) && !vma_is_shmem(vma)))
return false;
Otherwise someone could place swapbacked pages in there (using UFFDIO_MOVE),
I think. But conceptually, I don't think userfaultfd makes sense at all for
these VMAs. And if there are good reasons for it in the future, we could
enable the parts that make sense.
I think other places like khugepaged should handle it correctly (not set
swapbacked) due to your changes to folio_add_new_anon_rmap().
--
Cheers,
David / dhildenb
* Re: [PATCH v22 1/4] mm: add MAP_DROPPABLE for designating always lazily freeable mappings
2024-07-11 22:29 ` David Hildenbrand
@ 2024-07-12 1:21 ` Jason A. Donenfeld
0 siblings, 0 replies; 39+ messages in thread
From: Jason A. Donenfeld @ 2024-07-12 1:21 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, patches, tglx, linux-crypto, linux-api, x86,
Linus Torvalds, Greg Kroah-Hartman, Adhemerval Zanella Netto,
Carlos O'Donell, Florian Weimer, Arnd Bergmann, Jann Horn,
Christian Brauner, David Hildenbrand, linux-mm
On Fri, Jul 12, 2024 at 12:29:17AM +0200, David Hildenbrand wrote:
> > I'll try to think of some corner cases we might be missing.
>
> Sorry that I keep coming up with corner cases :) But these should be easy to handle:
Thank you for coming up with them!
> We don't want to suddenly get pages that are swapbacked.
> Otherwise someone could place swapbacked pages in there (using UFFDIO_MOVE)
Both seem like reasonable concerns. Added to v+1.
Jason