rust-for-linux.vger.kernel.org archive mirror
* [PATCH v13 0/8] Rust support for mm_struct, vm_area_struct, and mmap
@ 2025-02-03 12:14 Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 1/8] mm: rust: add abstraction for struct mm_struct Alice Ryhl
                   ` (7 more replies)
  0 siblings, 8 replies; 23+ messages in thread
From: Alice Ryhl @ 2025-02-03 12:14 UTC (permalink / raw)
  To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Liam R. Howlett, Andrew Morton, Greg Kroah-Hartman,
	Arnd Bergmann, Jann Horn, Suren Baghdasaryan
  Cc: Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux, Alice Ryhl, Balbir Singh

This series updates the vm_area_struct support to use the approach we
discussed at LPC: there are several different Rust wrappers for
vm_area_struct depending on the kind of access you have to the vma, and
each wrapper allows a different set of operations on the vma.
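
As a rough sketch of how the wrappers are meant to be used together
(method names follow the patches in this series):

	use kernel::mm::MmWithUser;

	fn inspect(mm: &MmWithUser, addr: usize) {
	    // Read access: hold the mmap read lock around the lookup.
	    let guard = mm.mmap_read_lock();
	    if let Some(vma) = guard.vma_lookup(addr) {
	        // `vma` is a `&VmAreaRef` that cannot outlive `guard`.
	        let _flags = vma.flags();
	    }
	    // Dropping `guard` releases the mmap read lock.
	}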

Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
Changes in v13:
- Rebase on v6.14-rc1.
- Remove casts that are now unnecessary due to improved Rust/C integer
  type mappings.
- Link to v12: https://lore.kernel.org/r/20250115-vma-v12-0-375099ae017a@google.com

Changes in v12:
- Add additional documentation to modules at the top of mm.rs and virt.rs.
- Explain why the name Mm is used in commit message of patch 1.
- Expand zap_page_range_single with docs suggested by Lorenzo.
- Update safety comment in vm_insert_page
- Mention that VmAreaMixedMap is identical to VmAreaRef except for VM_MIXEDMAP.
- Update docs for as_mixedmap_vma.
- Add additional docs for VmAreaNew struct.
- Rename `get_read` -> `readable` and equivalent for write/exec.
- Use mut pointers for `from_raw` for VMAs.
- Update safety comment for fops_mmap.
- Add additional docs for MiscDevice::mmap.
- Don't introduce and immediately delete mmgrab_current.
- Reduce active_pid_ns comment at Andreas's suggestion and link to get_pid_ns.
- Fix documentation test in rust/kernel/task.rs.
- Fix warning about unused variables in lock_vma_under_rcu when
  CONFIG_PER_VMA_LOCK=n.
- Fix minor typos.
- Link to v11: https://lore.kernel.org/r/20241211-vma-v11-0-466640428fc3@google.com

Changes in v11:
- Add accessor for the vm_mm field of vm_area_struct.
- Pass the file to MiscDevice::mmap for consistency with
  https://lore.kernel.org/r/20241210-miscdevice-file-param-v3-1-b2a79b666dc5@google.com
- Link to v10: https://lore.kernel.org/r/20241129-vma-v10-0-4dfff05ba927@google.com

Changes in v10:
- Update docs for `set_io`.
- Check address in `zap_page_range_single`.
- Completely redo the last patch.
- Link to v9: https://lore.kernel.org/r/20241122-vma-v9-0-7127bfcdd54e@google.com

Changes in v9:
- Be more explicit about VmAreaNew being used with f_ops->mmap().
- Point out that clearing VM_MAYWRITE is irreversible.
- Use __vm_flags to set the flags.
- Use as_ and into_ prefixes for conversions.
- Update lock_vma_under_rcu docs and commit msg
- Mention that VmAreaRef::end is exclusive.
- Reword docs for zap_page_range_single.
- Minor fixes to flag docs.
- Add way to access current->mm without a refcount increment.
- Link to v8: https://lore.kernel.org/r/20241120-vma-v8-0-eb31425da66b@google.com

Changes in v8:
- Split series into more commits to ease review.
- Improve read locks based on Lorenzo's doc: either the mmap or vma lock
  can be used.
- Get rid of mmap write lock because it's possible to avoid the need for
  it.
- Do not allow invalid flag combinations on VmAreaNew.
- Link to v7: https://lore.kernel.org/r/20241014-vma-v7-0-01e32f861195@google.com

Changes in v7:
- Make the mmap read/write lock guards respect strict owner semantics.
- Link to v6: https://lore.kernel.org/r/20241010-vma-v6-0-d89039b6f573@google.com

Changes in v6:
- Introduce VmArea{Ref,Mut,New} distinction.
- Add a second patchset for miscdevice.
- Rebase on char-misc-next (currently on v6.12-rc2).
- Link to v5: https://lore.kernel.org/r/20240806-vma-v5-1-04018f05de2b@google.com

Changes in v5:
- Rename VmArea::from_raw_vma to from_raw.
- Use Pin for mutable VmArea references.
- Go through `ARef::from` in `mmgrab_current`.
- Link to v4: https://lore.kernel.org/r/20240802-vma-v4-1-091a87058a43@google.com

Changes in v4:
- Pull out ARef::into_raw into a separate patch.
- Update invariants and struct documentation.
- Rename from_raw_mm to from_raw.
- Link to v3: https://lore.kernel.org/r/20240801-vma-v3-1-db6c1c0afda9@google.com

Changes in v3:
- Reorder entries in mm.rs.
- Use ARef for mmput_async helper.
- Clarify that VmArea requires you to hold the mmap read or write lock.
- Link to v2: https://lore.kernel.org/r/20240727-vma-v2-1-ab3e5927dc3a@google.com

Changes in v2:
- mm.rs is redesigned from scratch making use of AsRef
- Add notes about whether destructors may sleep
- Rename Area to VmArea
- Link to v1: https://lore.kernel.org/r/20240723-vma-v1-1-32ad5a0118ee@google.com

---
Alice Ryhl (8):
      mm: rust: add abstraction for struct mm_struct
      mm: rust: add vm_area_struct methods that require read access
      mm: rust: add vm_insert_page
      mm: rust: add lock_vma_under_rcu
      mm: rust: add mmput_async support
      mm: rust: add VmAreaNew for f_ops->mmap()
      rust: miscdevice: add mmap support
      task: rust: rework how current is accessed

 rust/helpers/helpers.c    |   1 +
 rust/helpers/mm.c         |  50 +++++
 rust/kernel/lib.rs        |   1 +
 rust/kernel/miscdevice.rs |  44 +++++
 rust/kernel/mm.rs         | 339 +++++++++++++++++++++++++++++++++
 rust/kernel/mm/virt.rs    | 476 ++++++++++++++++++++++++++++++++++++++++++++++
 rust/kernel/task.rs       | 247 ++++++++++++------------
 7 files changed, 1040 insertions(+), 118 deletions(-)
---
base-commit: 2014c95afecee3e76ca4a56956a936e23283f05b
change-id: 20240723-vma-f80119f9fb35

Best regards,
-- 
Alice Ryhl <aliceryhl@google.com>



* [PATCH v13 1/8] mm: rust: add abstraction for struct mm_struct
  2025-02-03 12:14 [PATCH v13 0/8] Rust support for mm_struct, vm_area_struct, and mmap Alice Ryhl
@ 2025-02-03 12:14 ` Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access Alice Ryhl
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Alice Ryhl @ 2025-02-03 12:14 UTC (permalink / raw)
  To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Liam R. Howlett, Andrew Morton, Greg Kroah-Hartman,
	Arnd Bergmann, Jann Horn, Suren Baghdasaryan
  Cc: Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux, Alice Ryhl, Balbir Singh

These abstractions allow you to reference a `struct mm_struct` using
both mmgrab and mmget refcounts. This is done using two Rust types:

* Mm - represents an mm_struct where you don't know anything about the
  value of mm_users.
* MmWithUser - represents an mm_struct where you know at compile time
  that mm_users is non-zero.

This allows us to encode in the type system whether a method requires
that mm_users is non-zero or not. For instance, you can always call
`mmget_not_zero` but you can only call `mmap_read_lock` when mm_users is
non-zero.

The struct is called `Mm` for consistency with the C side.

The ability to obtain `current->mm` is added later in this series.
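
For illustration, a minimal sketch of the intended usage (only method
names introduced by this patch):

	use kernel::mm::Mm;

	fn read_lock_if_alive(mm: &Mm) {
	    // `mmget_not_zero` is always available on `Mm`; it fails once
	    // `mm_users` has dropped to zero.
	    if let Some(mm_user) = mm.mmget_not_zero() {
	        // `mmap_read_lock` is only available on `MmWithUser`.
	        let guard = mm_user.mmap_read_lock();
	        drop(guard); // release the mmap read lock
	    }
	}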

Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: Balbir Singh <balbirs@nvidia.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/helpers/helpers.c |   1 +
 rust/helpers/mm.c      |  39 +++++++++
 rust/kernel/lib.rs     |   1 +
 rust/kernel/mm.rs      | 209 +++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 250 insertions(+)

diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
index 0640b7e115be..97cfc2d29f5e 100644
--- a/rust/helpers/helpers.c
+++ b/rust/helpers/helpers.c
@@ -18,6 +18,7 @@
 #include "io.c"
 #include "jump_label.c"
 #include "kunit.c"
+#include "mm.c"
 #include "mutex.c"
 #include "page.c"
 #include "platform.c"
diff --git a/rust/helpers/mm.c b/rust/helpers/mm.c
new file mode 100644
index 000000000000..7201747a5d31
--- /dev/null
+++ b/rust/helpers/mm.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/mm.h>
+#include <linux/sched/mm.h>
+
+void rust_helper_mmgrab(struct mm_struct *mm)
+{
+	mmgrab(mm);
+}
+
+void rust_helper_mmdrop(struct mm_struct *mm)
+{
+	mmdrop(mm);
+}
+
+void rust_helper_mmget(struct mm_struct *mm)
+{
+	mmget(mm);
+}
+
+bool rust_helper_mmget_not_zero(struct mm_struct *mm)
+{
+	return mmget_not_zero(mm);
+}
+
+void rust_helper_mmap_read_lock(struct mm_struct *mm)
+{
+	mmap_read_lock(mm);
+}
+
+bool rust_helper_mmap_read_trylock(struct mm_struct *mm)
+{
+	return mmap_read_trylock(mm);
+}
+
+void rust_helper_mmap_read_unlock(struct mm_struct *mm)
+{
+	mmap_read_unlock(mm);
+}
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 496ed32b0911..9cf35fbff356 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -57,6 +57,7 @@
 pub mod kunit;
 pub mod list;
 pub mod miscdevice;
+pub mod mm;
 #[cfg(CONFIG_NET)]
 pub mod net;
 pub mod of;
diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
new file mode 100644
index 000000000000..2fb5f440af60
--- /dev/null
+++ b/rust/kernel/mm.rs
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2024 Google LLC.
+
+//! Memory management.
+//!
+//! This module deals with managing the address space of userspace processes. Each process has an
+//! instance of [`Mm`], which keeps track of multiple VMAs (virtual memory areas). Each VMA
+//! corresponds to a region of memory that the userspace process can access, and the VMA lets you
+//! control what happens when userspace reads or writes to that region of memory.
+//!
+//! C header: [`include/linux/mm.h`](srctree/include/linux/mm.h)
+
+use crate::{
+    bindings,
+    types::{ARef, AlwaysRefCounted, NotThreadSafe, Opaque},
+};
+use core::{ops::Deref, ptr::NonNull};
+
+/// A wrapper for the kernel's `struct mm_struct`.
+///
+/// This represents the address space of a userspace process, so each process has one `Mm`
+/// instance. It may hold many VMAs internally.
+///
+/// There is a counter called `mm_users` that counts the users of the address space; this includes
+/// the userspace process itself, but can also include kernel threads accessing the address space.
+/// Once `mm_users` reaches zero, this indicates that the address space can be destroyed. To access
+/// the address space, you must prevent `mm_users` from reaching zero while you are accessing it.
+/// The [`MmWithUser`] type represents an address space where this is guaranteed, and you can
+/// create one using [`mmget_not_zero`].
+///
+/// The `ARef<Mm>` smart pointer holds an `mmgrab` refcount. Its destructor may sleep.
+///
+/// # Invariants
+///
+/// Values of this type are always refcounted using `mmgrab`.
+///
+/// [`mmget_not_zero`]: Mm::mmget_not_zero
+#[repr(transparent)]
+pub struct Mm {
+    mm: Opaque<bindings::mm_struct>,
+}
+
+// SAFETY: It is safe to call `mmdrop` from a thread other than the one that called `mmgrab`.
+unsafe impl Send for Mm {}
+// SAFETY: All methods on `Mm` can be called in parallel from several threads.
+unsafe impl Sync for Mm {}
+
+// SAFETY: By the type invariants, this type is always refcounted.
+unsafe impl AlwaysRefCounted for Mm {
+    #[inline]
+    fn inc_ref(&self) {
+        // SAFETY: The pointer is valid since self is a reference.
+        unsafe { bindings::mmgrab(self.as_raw()) };
+    }
+
+    #[inline]
+    unsafe fn dec_ref(obj: NonNull<Self>) {
+        // SAFETY: The caller is giving up their refcount.
+        unsafe { bindings::mmdrop(obj.cast().as_ptr()) };
+    }
+}
+
+/// A wrapper for the kernel's `struct mm_struct`.
+///
+/// This type is like [`Mm`], but with non-zero `mm_users`. It can only be used when `mm_users` can
+/// be proven to be non-zero at compile-time, usually because the relevant code holds an `mmget`
+/// refcount. It can be used to access the associated address space.
+///
+/// The `ARef<MmWithUser>` smart pointer holds an `mmget` refcount. Its destructor may sleep.
+///
+/// # Invariants
+///
+/// Values of this type are always refcounted using `mmget`. The value of `mm_users` is non-zero.
+#[repr(transparent)]
+pub struct MmWithUser {
+    mm: Mm,
+}
+
+// SAFETY: It is safe to call `mmput` from a thread other than the one that called `mmget`.
+unsafe impl Send for MmWithUser {}
+// SAFETY: All methods on `MmWithUser` can be called in parallel from several threads.
+unsafe impl Sync for MmWithUser {}
+
+// SAFETY: By the type invariants, this type is always refcounted.
+unsafe impl AlwaysRefCounted for MmWithUser {
+    #[inline]
+    fn inc_ref(&self) {
+        // SAFETY: The pointer is valid since self is a reference.
+        unsafe { bindings::mmget(self.as_raw()) };
+    }
+
+    #[inline]
+    unsafe fn dec_ref(obj: NonNull<Self>) {
+        // SAFETY: The caller is giving up their refcount.
+        unsafe { bindings::mmput(obj.cast().as_ptr()) };
+    }
+}
+
+// Make all `Mm` methods available on `MmWithUser`.
+impl Deref for MmWithUser {
+    type Target = Mm;
+
+    #[inline]
+    fn deref(&self) -> &Mm {
+        &self.mm
+    }
+}
+
+// These methods are safe to call even if `mm_users` is zero.
+impl Mm {
+    /// Returns a raw pointer to the inner `mm_struct`.
+    #[inline]
+    pub fn as_raw(&self) -> *mut bindings::mm_struct {
+        self.mm.get()
+    }
+
+    /// Obtain a reference from a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// The caller must ensure that `ptr` points at an `mm_struct`, and that it is not deallocated
+    /// during the lifetime 'a.
+    #[inline]
+    pub unsafe fn from_raw<'a>(ptr: *const bindings::mm_struct) -> &'a Mm {
+        // SAFETY: Caller promises that the pointer is valid for 'a. Layouts are compatible due to
+        // repr(transparent).
+        unsafe { &*ptr.cast() }
+    }
+
+    /// Calls `mmget_not_zero` and returns a handle if it succeeds.
+    #[inline]
+    pub fn mmget_not_zero(&self) -> Option<ARef<MmWithUser>> {
+        // SAFETY: The pointer is valid since self is a reference.
+        let success = unsafe { bindings::mmget_not_zero(self.as_raw()) };
+
+        if success {
+            // SAFETY: We just created an `mmget` refcount.
+            Some(unsafe { ARef::from_raw(NonNull::new_unchecked(self.as_raw().cast())) })
+        } else {
+            None
+        }
+    }
+}
+
+// These methods require `mm_users` to be non-zero.
+impl MmWithUser {
+    /// Obtain a reference from a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// The caller must ensure that `ptr` points at an `mm_struct`, and that `mm_users` remains
+    /// non-zero for the duration of the lifetime 'a.
+    #[inline]
+    pub unsafe fn from_raw<'a>(ptr: *const bindings::mm_struct) -> &'a MmWithUser {
+        // SAFETY: Caller promises that the pointer is valid for 'a. The layout is compatible due
+        // to repr(transparent).
+        unsafe { &*ptr.cast() }
+    }
+
+    /// Lock the mmap read lock.
+    #[inline]
+    pub fn mmap_read_lock(&self) -> MmapReadGuard<'_> {
+        // SAFETY: The pointer is valid since self is a reference.
+        unsafe { bindings::mmap_read_lock(self.as_raw()) };
+
+        // INVARIANT: We just acquired the read lock.
+        MmapReadGuard {
+            mm: self,
+            _nts: NotThreadSafe,
+        }
+    }
+
+    /// Try to lock the mmap read lock.
+    #[inline]
+    pub fn mmap_read_trylock(&self) -> Option<MmapReadGuard<'_>> {
+        // SAFETY: The pointer is valid since self is a reference.
+        let success = unsafe { bindings::mmap_read_trylock(self.as_raw()) };
+
+        if success {
+            // INVARIANT: We just acquired the read lock.
+            Some(MmapReadGuard {
+                mm: self,
+                _nts: NotThreadSafe,
+            })
+        } else {
+            None
+        }
+    }
+}
+
+/// A guard for the mmap read lock.
+///
+/// # Invariants
+///
+/// This `MmapReadGuard` guard owns the mmap read lock.
+pub struct MmapReadGuard<'a> {
+    mm: &'a MmWithUser,
+    // `mmap_read_lock` and `mmap_read_unlock` must be called on the same thread
+    _nts: NotThreadSafe,
+}
+
+impl Drop for MmapReadGuard<'_> {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: We hold the read lock by the type invariants.
+        unsafe { bindings::mmap_read_unlock(self.mm.as_raw()) };
+    }
+}

-- 
2.48.1.362.g079036d154-goog



* [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-03 12:14 [PATCH v13 0/8] Rust support for mm_struct, vm_area_struct, and mmap Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 1/8] mm: rust: add abstraction for struct mm_struct Alice Ryhl
@ 2025-02-03 12:14 ` Alice Ryhl
  2025-02-03 15:44   ` Liam R. Howlett
  2025-02-03 12:14 ` [PATCH v13 3/8] mm: rust: add vm_insert_page Alice Ryhl
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: Alice Ryhl @ 2025-02-03 12:14 UTC (permalink / raw)
  To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Liam R. Howlett, Andrew Morton, Greg Kroah-Hartman,
	Arnd Bergmann, Jann Horn, Suren Baghdasaryan
  Cc: Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux, Alice Ryhl

This adds a type called VmAreaRef which is used when referencing a vma
that you have read access to. Here, read access means that you hold
either the mmap read lock or the vma read lock (or stronger).

Additionally, a vma_lookup method is added to the mmap read guard, which
enables you to obtain a &VmAreaRef in safe Rust code.

This patch only provides a way to take the mmap read lock; a follow-up
patch adds a way to take just the vma read lock.
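
For example, a lookup in safe code looks roughly like this (a sketch
using only the methods added by this patch):

	use kernel::mm::MmWithUser;

	fn vma_size(mm: &MmWithUser, addr: usize) -> Option<usize> {
	    let guard = mm.mmap_read_lock();
	    // The `&VmAreaRef` borrows from the guard, so it cannot be
	    // used after the mmap read lock is released.
	    let vma = guard.vma_lookup(addr)?;
	    Some(vma.end() - vma.start())
	}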

Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/helpers/mm.c      |   6 ++
 rust/kernel/mm.rs      |  21 +++++
 rust/kernel/mm/virt.rs | 215 +++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 242 insertions(+)

diff --git a/rust/helpers/mm.c b/rust/helpers/mm.c
index 7201747a5d31..7b72eb065a3e 100644
--- a/rust/helpers/mm.c
+++ b/rust/helpers/mm.c
@@ -37,3 +37,9 @@ void rust_helper_mmap_read_unlock(struct mm_struct *mm)
 {
 	mmap_read_unlock(mm);
 }
+
+struct vm_area_struct *rust_helper_vma_lookup(struct mm_struct *mm,
+					      unsigned long addr)
+{
+	return vma_lookup(mm, addr);
+}
diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
index 2fb5f440af60..bd6ff40f106f 100644
--- a/rust/kernel/mm.rs
+++ b/rust/kernel/mm.rs
@@ -17,6 +17,8 @@
 };
 use core::{ops::Deref, ptr::NonNull};
 
+pub mod virt;
+
 /// A wrapper for the kernel's `struct mm_struct`.
 ///
 /// This represents the address space of a userspace process, so each process has one `Mm`
@@ -200,6 +202,25 @@ pub struct MmapReadGuard<'a> {
     _nts: NotThreadSafe,
 }
 
+impl<'a> MmapReadGuard<'a> {
+    /// Look up a vma at the given address.
+    #[inline]
+    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmAreaRef> {
+        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
+        // for `vma_addr`.
+        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr) };
+
+        if vma.is_null() {
+            None
+        } else {
+            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
+            // the returned area will borrow from this read lock guard, so it can only be used
+            // while the mmap read lock is still held.
+            unsafe { Some(virt::VmAreaRef::from_raw(vma)) }
+        }
+    }
+}
+
 impl Drop for MmapReadGuard<'_> {
     #[inline]
     fn drop(&mut self) {
diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
new file mode 100644
index 000000000000..dfe147cafdb3
--- /dev/null
+++ b/rust/kernel/mm/virt.rs
@@ -0,0 +1,215 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Copyright (C) 2024 Google LLC.
+
+//! Virtual memory.
+//!
+//! This module deals with managing a single VMA in the address space of a userspace process. Each
+//! VMA corresponds to a region of memory that the userspace process can access, and the VMA lets
+//! you control what happens when userspace reads or writes to that region of memory.
+//!
+//! The module has several different Rust types that all correspond to the C type called
+//! `vm_area_struct`. The different structs represent what kind of access you have to the VMA, e.g.
+//! [`VmAreaRef`] is used when you hold the mmap or vma read lock. Using the appropriate struct
+//! ensures that you can't, for example, accidentally call a function that requires holding the
+//! write lock when you only hold the read lock.
+
+use crate::{bindings, mm::MmWithUser, types::Opaque};
+
+/// A wrapper for the kernel's `struct vm_area_struct` with read access.
+///
+/// It represents an area of virtual memory.
+///
+/// # Invariants
+///
+/// The caller must hold the mmap read lock or the vma read lock.
+#[repr(transparent)]
+pub struct VmAreaRef {
+    vma: Opaque<bindings::vm_area_struct>,
+}
+
+// Methods you can call when holding the mmap or vma read lock (or stronger). They must be usable
+// no matter what the vma flags are.
+impl VmAreaRef {
+    /// Access a virtual memory area given a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that `vma` is valid for the duration of 'a, and that the mmap or vma
+    /// read lock (or stronger) is held for at least the duration of 'a.
+    #[inline]
+    pub unsafe fn from_raw<'a>(vma: *const bindings::vm_area_struct) -> &'a Self {
+        // SAFETY: The caller ensures that the invariants are satisfied for the duration of 'a.
+        unsafe { &*vma.cast() }
+    }
+
+    /// Returns a raw pointer to this area.
+    #[inline]
+    pub fn as_ptr(&self) -> *mut bindings::vm_area_struct {
+        self.vma.get()
+    }
+
+    /// Access the underlying `mm_struct`.
+    #[inline]
+    pub fn mm(&self) -> &MmWithUser {
+        // SAFETY: By the type invariants, this `vm_area_struct` is valid and we hold the mmap/vma
+        // read lock or stronger. This implies that the underlying mm has a non-zero value of
+        // `mm_users`.
+        unsafe { MmWithUser::from_raw((*self.as_ptr()).vm_mm) }
+    }
+
+    /// Returns the flags associated with the virtual memory area.
+    ///
+    /// The possible flags are a combination of the constants in [`flags`].
+    #[inline]
+    pub fn flags(&self) -> vm_flags_t {
+        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
+        // access is not a data race.
+        unsafe { (*self.as_ptr()).__bindgen_anon_2.vm_flags }
+    }
+
+    /// Returns the (inclusive) start address of the virtual memory area.
+    #[inline]
+    pub fn start(&self) -> usize {
+        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
+        // access is not a data race.
+        unsafe { (*self.as_ptr()).__bindgen_anon_1.__bindgen_anon_1.vm_start }
+    }
+
+    /// Returns the (exclusive) end address of the virtual memory area.
+    #[inline]
+    pub fn end(&self) -> usize {
+        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
+        // access is not a data race.
+        unsafe { (*self.as_ptr()).__bindgen_anon_1.__bindgen_anon_1.vm_end }
+    }
+
+    /// Zap pages in the given page range.
+    ///
+    /// This clears page table mappings for the range at the leaf level, leaving all other page
+    /// tables intact, and freeing any memory referenced by the VMA in this range. That is,
+    /// anonymous memory is completely freed, file-backed memory has its reference count on page
+    /// cache folios dropped, and any dirty data will still be written back to disk as usual.
+    ///
+    /// It may seem odd that we clear at the leaf level; this is, however, a product of the page
+    /// table structure used to map physical memory into a virtual address space: each virtual
+    /// address actually consists of a series of array indices into page tables, which form a
+    /// hierarchical page table structure.
+    ///
+    /// As a result, each page table level maps a multiple of the ranges mapped by the levels
+    /// below it, and thus spans ever-increasing ranges of pages. At the leaf, or PTE, level we
+    /// map the actual physical memory.
+    ///
+    /// This is where a zap operates, as it is the only place we can be certain of clearing without
+    /// impacting any other virtual mappings. It is an implementation detail as to whether the
+    /// kernel goes further in freeing unused page tables, but for the purposes of this operation
+    /// we must only assume that the leaf level is cleared.
+    #[inline]
+    pub fn zap_page_range_single(&self, address: usize, size: usize) {
+        let (end, did_overflow) = address.overflowing_add(size);
+        if did_overflow || address < self.start() || self.end() < end {
+            // TODO: call WARN_ONCE once Rust version of it is added
+            return;
+        }
+
+        // SAFETY: By the type invariants, the caller has read access to this VMA, which is
+        // sufficient for this method call. This method has no requirements on the vma flags. The
+        // address range is checked to be within the vma.
+        unsafe {
+            bindings::zap_page_range_single(
+                self.as_ptr(),
+                address,
+                size,
+                core::ptr::null_mut(),
+            )
+        };
+    }
+}
+
+/// The integer type used for vma flags.
+#[doc(inline)]
+pub use bindings::vm_flags_t;
+
+/// All possible flags for [`VmAreaRef`].
+pub mod flags {
+    use super::vm_flags_t;
+    use crate::bindings;
+
+    /// No flags are set.
+    pub const NONE: vm_flags_t = bindings::VM_NONE as _;
+
+    /// Mapping allows reads.
+    pub const READ: vm_flags_t = bindings::VM_READ as _;
+
+    /// Mapping allows writes.
+    pub const WRITE: vm_flags_t = bindings::VM_WRITE as _;
+
+    /// Mapping allows execution.
+    pub const EXEC: vm_flags_t = bindings::VM_EXEC as _;
+
+    /// Mapping is shared.
+    pub const SHARED: vm_flags_t = bindings::VM_SHARED as _;
+
+    /// Mapping may be updated to allow reads.
+    pub const MAYREAD: vm_flags_t = bindings::VM_MAYREAD as _;
+
+    /// Mapping may be updated to allow writes.
+    pub const MAYWRITE: vm_flags_t = bindings::VM_MAYWRITE as _;
+
+    /// Mapping may be updated to allow execution.
+    pub const MAYEXEC: vm_flags_t = bindings::VM_MAYEXEC as _;
+
+    /// Mapping may be updated to be shared.
+    pub const MAYSHARE: vm_flags_t = bindings::VM_MAYSHARE as _;
+
+    /// Page-ranges managed without `struct page`, just pure PFN.
+    pub const PFNMAP: vm_flags_t = bindings::VM_PFNMAP as _;
+
+    /// Memory mapped I/O or similar.
+    pub const IO: vm_flags_t = bindings::VM_IO as _;
+
+    /// Do not copy this vma on fork.
+    pub const DONTCOPY: vm_flags_t = bindings::VM_DONTCOPY as _;
+
+    /// Cannot expand with mremap().
+    pub const DONTEXPAND: vm_flags_t = bindings::VM_DONTEXPAND as _;
+
+    /// Lock the pages covered when they are faulted in.
+    pub const LOCKONFAULT: vm_flags_t = bindings::VM_LOCKONFAULT as _;
+
+    /// Is a VM accounted object.
+    pub const ACCOUNT: vm_flags_t = bindings::VM_ACCOUNT as _;
+
+    /// Should the VM suppress accounting.
+    pub const NORESERVE: vm_flags_t = bindings::VM_NORESERVE as _;
+
+    /// Huge TLB Page VM.
+    pub const HUGETLB: vm_flags_t = bindings::VM_HUGETLB as _;
+
+    /// Synchronous page faults. (DAX-specific)
+    pub const SYNC: vm_flags_t = bindings::VM_SYNC as _;
+
+    /// Architecture-specific flag.
+    pub const ARCH_1: vm_flags_t = bindings::VM_ARCH_1 as _;
+
+    /// Wipe VMA contents in child on fork.
+    pub const WIPEONFORK: vm_flags_t = bindings::VM_WIPEONFORK as _;
+
+    /// Do not include in the core dump.
+    pub const DONTDUMP: vm_flags_t = bindings::VM_DONTDUMP as _;
+
+    /// Not soft dirty clean area.
+    pub const SOFTDIRTY: vm_flags_t = bindings::VM_SOFTDIRTY as _;
+
+    /// Can contain `struct page` and pure PFN pages.
+    pub const MIXEDMAP: vm_flags_t = bindings::VM_MIXEDMAP as _;
+
+    /// MADV_HUGEPAGE marked this vma.
+    pub const HUGEPAGE: vm_flags_t = bindings::VM_HUGEPAGE as _;
+
+    /// MADV_NOHUGEPAGE marked this vma.
+    pub const NOHUGEPAGE: vm_flags_t = bindings::VM_NOHUGEPAGE as _;
+
+    /// KSM may merge identical pages.
+    pub const MERGEABLE: vm_flags_t = bindings::VM_MERGEABLE as _;
+}

-- 
2.48.1.362.g079036d154-goog



* [PATCH v13 3/8] mm: rust: add vm_insert_page
  2025-02-03 12:14 [PATCH v13 0/8] Rust support for mm_struct, vm_area_struct, and mmap Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 1/8] mm: rust: add abstraction for struct mm_struct Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access Alice Ryhl
@ 2025-02-03 12:14 ` Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 4/8] mm: rust: add lock_vma_under_rcu Alice Ryhl
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Alice Ryhl @ 2025-02-03 12:14 UTC (permalink / raw)
  To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Liam R. Howlett, Andrew Morton, Greg Kroah-Hartman,
	Arnd Bergmann, Jann Horn, Suren Baghdasaryan
  Cc: Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux, Alice Ryhl

The vm_insert_page method is only usable on vmas with the VM_MIXEDMAP
flag, so we introduce a new type to keep track of such vmas.

The approach used in this patch assumes that we will not need to encode
many flag combinations in the type. I don't think we need to encode more
than VM_MIXEDMAP and VM_PFNMAP as things are now. However, if that
becomes necessary, using generic parameters in a single type would scale
better as the number of flags increases.
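
A sketch of how a caller reaches `vm_insert_page` (assuming a `Page`
is already at hand):

	use kernel::error::{code::EINVAL, Result};
	use kernel::mm::virt::VmAreaRef;
	use kernel::page::Page;

	fn insert(vma: &VmAreaRef, addr: usize, page: &Page) -> Result {
	    // `vm_insert_page` only exists on `VmAreaMixedMap`, so it is
	    // unreachable unless `VM_MIXEDMAP` has been checked.
	    let mixedmap = vma.as_mixedmap_vma().ok_or(EINVAL)?;
	    mixedmap.vm_insert_page(addr, page)
	}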

Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/mm/virt.rs | 79 +++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 78 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
index dfe147cafdb3..64a0a47070a8 100644
--- a/rust/kernel/mm/virt.rs
+++ b/rust/kernel/mm/virt.rs
@@ -14,7 +14,15 @@
 //! ensures that you can't, for example, accidentally call a function that requires holding the
 //! write lock when you only hold the read lock.
 
-use crate::{bindings, mm::MmWithUser, types::Opaque};
+use crate::{
+    bindings,
+    error::{to_result, Result},
+    mm::MmWithUser,
+    page::Page,
+    types::Opaque,
+};
+
+use core::ops::Deref;
 
 /// A wrapper for the kernel's `struct vm_area_struct` with read access.
 ///
@@ -124,6 +132,75 @@ pub fn zap_page_range_single(&self, address: usize, size: usize) {
             )
         };
     }
+
+    /// If the [`VM_MIXEDMAP`] flag is set, returns a [`VmAreaMixedMap`] to this VMA, otherwise
+    /// returns `None`.
+    ///
+    /// This can be used to access methods that require [`VM_MIXEDMAP`] to be set.
+    ///
+    /// [`VM_MIXEDMAP`]: flags::MIXEDMAP
+    #[inline]
+    pub fn as_mixedmap_vma(&self) -> Option<&VmAreaMixedMap> {
+        if self.flags() & flags::MIXEDMAP != 0 {
+            // SAFETY: We just checked that `VM_MIXEDMAP` is set. All other requirements are
+            // satisfied by the type invariants of `VmAreaRef`.
+            Some(unsafe { VmAreaMixedMap::from_raw(self.as_ptr()) })
+        } else {
+            None
+        }
+    }
+}
+
+/// A wrapper for the kernel's `struct vm_area_struct` with read access and [`VM_MIXEDMAP`] set.
+///
+/// It represents an area of virtual memory.
+///
+/// This struct is identical to [`VmAreaRef`] except that it must only be used when the
+/// [`VM_MIXEDMAP`] flag is set on the vma.
+///
+/// # Invariants
+///
+/// The caller must hold the mmap read lock or the vma read lock. The `VM_MIXEDMAP` flag must be
+/// set.
+///
+/// [`VM_MIXEDMAP`]: flags::MIXEDMAP
+#[repr(transparent)]
+pub struct VmAreaMixedMap {
+    vma: VmAreaRef,
+}
+
+// Make all `VmAreaRef` methods available on `VmAreaMixedMap`.
+impl Deref for VmAreaMixedMap {
+    type Target = VmAreaRef;
+
+    #[inline]
+    fn deref(&self) -> &VmAreaRef {
+        &self.vma
+    }
+}
+
+impl VmAreaMixedMap {
+    /// Access a virtual memory area given a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that `vma` is valid for the duration of 'a, and that the mmap read lock
+    /// (or stronger) is held for at least the duration of 'a. The `VM_MIXEDMAP` flag must be set.
+    #[inline]
+    pub unsafe fn from_raw<'a>(vma: *const bindings::vm_area_struct) -> &'a Self {
+        // SAFETY: The caller ensures that the invariants are satisfied for the duration of 'a.
+        unsafe { &*vma.cast() }
+    }
+
+    /// Maps a single page at the given address within the virtual memory area.
+    ///
+    /// This operation does not take ownership of the page.
+    #[inline]
+    pub fn vm_insert_page(&self, address: usize, page: &Page) -> Result {
+        // SAFETY: By the type invariant of `Self`, the caller has read access and has verified
+        // that `VM_MIXEDMAP` is set. By the invariant on `Page`, the page has order 0.
+        to_result(unsafe { bindings::vm_insert_page(self.as_ptr(), address, page.as_ptr()) })
+    }
 }
 
 /// The integer type used for vma flags.

-- 
2.48.1.362.g079036d154-goog



* [PATCH v13 4/8] mm: rust: add lock_vma_under_rcu
  2025-02-03 12:14 [PATCH v13 0/8] Rust support for mm_struct, vm_area_struct, and mmap Alice Ryhl
                   ` (2 preceding siblings ...)
  2025-02-03 12:14 ` [PATCH v13 3/8] mm: rust: add vm_insert_page Alice Ryhl
@ 2025-02-03 12:14 ` Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 5/8] mm: rust: add mmput_async support Alice Ryhl
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Alice Ryhl @ 2025-02-03 12:14 UTC (permalink / raw)
  To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Liam R. Howlett, Andrew Morton, Greg Kroah-Hartman,
	Arnd Bergmann, Jann Horn, Suren Baghdasaryan
  Cc: Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux, Alice Ryhl

Currently, the binder driver always uses the mmap lock to make changes
to its vma. Because the mmap lock is global to the process, this can
involve significant contention. However, the kernel has a feature called
per-vma locks, which can significantly reduce contention. For example,
you can take a vma lock in parallel with an mmap write lock. This is
important because contention on the mmap lock has been a long-term
recurring challenge for the Binder driver.

This patch introduces support for using `lock_vma_under_rcu` from Rust.
The Rust Binder driver will be able to use this to reduce contention on
the mmap lock.
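
The expected usage is a trylock with a fallback, roughly:

	use kernel::mm::MmWithUser;

	fn find_vma_flags(mm: &MmWithUser, addr: usize) {
	    // Optimistically take the vma read lock; fall back to the mmap
	    // read lock on contention or when CONFIG_PER_VMA_LOCK=n.
	    if let Some(vma) = mm.lock_vma_under_rcu(addr) {
	        let _flags = vma.flags();
	    } else {
	        let guard = mm.mmap_read_lock();
	        if let Some(vma) = guard.vma_lookup(addr) {
	            let _flags = vma.flags();
	        }
	    }
	}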

Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/helpers/mm.c |  5 +++++
 rust/kernel/mm.rs | 60 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+)

diff --git a/rust/helpers/mm.c b/rust/helpers/mm.c
index 7b72eb065a3e..81b510c96fd2 100644
--- a/rust/helpers/mm.c
+++ b/rust/helpers/mm.c
@@ -43,3 +43,8 @@ struct vm_area_struct *rust_helper_vma_lookup(struct mm_struct *mm,
 {
 	return vma_lookup(mm, addr);
 }
+
+void rust_helper_vma_end_read(struct vm_area_struct *vma)
+{
+	vma_end_read(vma);
+}
diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
index bd6ff40f106f..88167c0cbd93 100644
--- a/rust/kernel/mm.rs
+++ b/rust/kernel/mm.rs
@@ -18,6 +18,7 @@
 use core::{ops::Deref, ptr::NonNull};
 
 pub mod virt;
+use virt::VmAreaRef;
 
 /// A wrapper for the kernel's `struct mm_struct`.
 ///
@@ -160,6 +161,36 @@ pub unsafe fn from_raw<'a>(ptr: *const bindings::mm_struct) -> &'a MmWithUser {
         unsafe { &*ptr.cast() }
     }
 
+    /// Attempt to access a vma using the vma read lock.
+    ///
+    /// This is an optimistic trylock operation, so it may fail if there is contention. In that
+    /// case, you should fall back to taking the mmap read lock.
+    ///
+    /// When per-vma locks are disabled, this always returns `None`.
+    #[inline]
+    pub fn lock_vma_under_rcu(&self, vma_addr: usize) -> Option<VmaReadGuard<'_>> {
+        #[cfg(CONFIG_PER_VMA_LOCK)]
+        {
+            // SAFETY: Calling `bindings::lock_vma_under_rcu` is always okay given an mm where
+            // `mm_users` is non-zero.
+            let vma = unsafe { bindings::lock_vma_under_rcu(self.as_raw(), vma_addr) };
+            if !vma.is_null() {
+                return Some(VmaReadGuard {
+                    // SAFETY: If `lock_vma_under_rcu` returns a non-null ptr, then it points at a
+                    // valid vma. The vma is stable for as long as the vma read lock is held.
+                    vma: unsafe { VmAreaRef::from_raw(vma) },
+                    _nts: NotThreadSafe,
+                });
+            }
+        }
+
+        // Silence warnings about unused variables.
+        #[cfg(not(CONFIG_PER_VMA_LOCK))]
+        let _ = vma_addr;
+
+        None
+    }
+
     /// Lock the mmap read lock.
     #[inline]
     pub fn mmap_read_lock(&self) -> MmapReadGuard<'_> {
@@ -228,3 +259,32 @@ fn drop(&mut self) {
         unsafe { bindings::mmap_read_unlock(self.mm.as_raw()) };
     }
 }
+
+/// A guard for the vma read lock.
+///
+/// # Invariants
+///
+/// This `VmaReadGuard` guard owns the vma read lock.
+pub struct VmaReadGuard<'a> {
+    vma: &'a VmAreaRef,
+    // `vma_end_read` must be called on the same thread where the lock was taken
+    _nts: NotThreadSafe,
+}
+
+// Make all `VmAreaRef` methods available on `VmaReadGuard`.
+impl Deref for VmaReadGuard<'_> {
+    type Target = VmAreaRef;
+
+    #[inline]
+    fn deref(&self) -> &VmAreaRef {
+        self.vma
+    }
+}
+
+impl Drop for VmaReadGuard<'_> {
+    #[inline]
+    fn drop(&mut self) {
+        // SAFETY: We hold the read lock by the type invariants.
+        unsafe { bindings::vma_end_read(self.vma.as_ptr()) };
+    }
+}

-- 
2.48.1.362.g079036d154-goog



* [PATCH v13 5/8] mm: rust: add mmput_async support
  2025-02-03 12:14 [PATCH v13 0/8] Rust support for mm_struct, vm_area_struct, and mmap Alice Ryhl
                   ` (3 preceding siblings ...)
  2025-02-03 12:14 ` [PATCH v13 4/8] mm: rust: add lock_vma_under_rcu Alice Ryhl
@ 2025-02-03 12:14 ` Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 6/8] mm: rust: add VmAreaNew for f_ops->mmap() Alice Ryhl
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Alice Ryhl @ 2025-02-03 12:14 UTC (permalink / raw)
  To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Liam R. Howlett, Andrew Morton, Greg Kroah-Hartman,
	Arnd Bergmann, Jann Horn, Suren Baghdasaryan
  Cc: Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux, Alice Ryhl

Adds an MmWithUserAsync type that uses mmput_async when dropped but is
otherwise identical to MmWithUser. This has to be done using a separate
type because the thing we are changing is the destructor.

Rust Binder needs this to avoid a certain deadlock. See commit
9a9ab0d96362 ("binder: fix race between mmput() and do_exit()") for
details. It's also needed in the shrinker to avoid cleaning up the mm in
the shrinker's context.
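
Conversion is a plain function on `MmWithUser`, for example:

	use kernel::mm::{MmWithUser, MmWithUserAsync};
	use kernel::types::ARef;

	fn defer_mmput(mm: ARef<MmWithUser>) -> ARef<MmWithUserAsync> {
	    // Same mm and same refcount; only the destructor changes, so
	    // the returned handle may be dropped in atomic context.
	    MmWithUser::into_mmput_async(mm)
	}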

Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com> (for mm bits)
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/mm.rs | 49 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
index 88167c0cbd93..1c43e51c9cca 100644
--- a/rust/kernel/mm.rs
+++ b/rust/kernel/mm.rs
@@ -110,6 +110,48 @@ fn deref(&self) -> &Mm {
     }
 }
 
+/// A wrapper for the kernel's `struct mm_struct`.
+///
+/// This type is identical to `MmWithUser` except that it uses `mmput_async` when dropping a
+/// refcount. This means that the destructor of `ARef<MmWithUserAsync>` is safe to call in atomic
+/// context.
+///
+/// # Invariants
+///
+/// Values of this type are always refcounted using `mmget`. The value of `mm_users` is non-zero.
+#[repr(transparent)]
+pub struct MmWithUserAsync {
+    mm: MmWithUser,
+}
+
+// SAFETY: It is safe to call `mmput_async` from a thread other than the one that called `mmget`.
+unsafe impl Send for MmWithUserAsync {}
+// SAFETY: All methods on `MmWithUserAsync` can be called in parallel from several threads.
+unsafe impl Sync for MmWithUserAsync {}
+
+// SAFETY: By the type invariants, this type is always refcounted.
+unsafe impl AlwaysRefCounted for MmWithUserAsync {
+    fn inc_ref(&self) {
+        // SAFETY: The pointer is valid since self is a reference.
+        unsafe { bindings::mmget(self.as_raw()) };
+    }
+
+    unsafe fn dec_ref(obj: NonNull<Self>) {
+        // SAFETY: The caller is giving up their refcount.
+        unsafe { bindings::mmput_async(obj.cast().as_ptr()) };
+    }
+}
+
+// Make all `MmWithUser` methods available on `MmWithUserAsync`.
+impl Deref for MmWithUserAsync {
+    type Target = MmWithUser;
+
+    #[inline]
+    fn deref(&self) -> &MmWithUser {
+        &self.mm
+    }
+}
+
 // These methods are safe to call even if `mm_users` is zero.
 impl Mm {
     /// Returns a raw pointer to the inner `mm_struct`.
@@ -161,6 +203,13 @@ pub unsafe fn from_raw<'a>(ptr: *const bindings::mm_struct) -> &'a MmWithUser {
         unsafe { &*ptr.cast() }
     }
 
+    /// Use `mmput_async` when dropping this refcount.
+    #[inline]
+    pub fn into_mmput_async(me: ARef<MmWithUser>) -> ARef<MmWithUserAsync> {
+        // SAFETY: The layouts and invariants are compatible.
+        unsafe { ARef::from_raw(ARef::into_raw(me).cast()) }
+    }
+
     /// Attempt to access a vma using the vma read lock.
     ///
     /// This is an optimistic trylock operation, so it may fail if there is contention. In that

-- 
2.48.1.362.g079036d154-goog



* [PATCH v13 6/8] mm: rust: add VmAreaNew for f_ops->mmap()
  2025-02-03 12:14 [PATCH v13 0/8] Rust support for mm_struct, vm_area_struct, and mmap Alice Ryhl
                   ` (4 preceding siblings ...)
  2025-02-03 12:14 ` [PATCH v13 5/8] mm: rust: add mmput_async support Alice Ryhl
@ 2025-02-03 12:14 ` Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 7/8] rust: miscdevice: add mmap support Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 8/8] task: rust: rework how current is accessed Alice Ryhl
  7 siblings, 0 replies; 23+ messages in thread
From: Alice Ryhl @ 2025-02-03 12:14 UTC (permalink / raw)
  To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Liam R. Howlett, Andrew Morton, Greg Kroah-Hartman,
	Arnd Bergmann, Jann Horn, Suren Baghdasaryan
  Cc: Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux, Alice Ryhl

This type will be used when setting up a new vma in an f_ops->mmap()
hook. Using a separate type from VmAreaRef allows us to have a separate
set of operations that you are only able to use during the mmap() hook.
For example, the VM_MIXEDMAP flag must not be changed after the initial
setup that happens during the f_ops->mmap() hook.

To avoid setting invalid flag values, the methods for clearing
VM_MAYWRITE and similar involve a check of VM_WRITE, and return an error
if VM_WRITE is set. Trying to use `try_clear_maywrite` without checking
the return value results in a compilation error because the `Result`
type is marked #[must_use].

For now, there's only a method for VM_MIXEDMAP and not VM_PFNMAP. When
we add a VM_PFNMAP method, we will need some way to prevent you from
setting both VM_MIXEDMAP and VM_PFNMAP on the same vma.
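
A sketch of an f_ops->mmap() hook built on these methods:

	use kernel::error::Result;
	use kernel::mm::virt::VmAreaNew;

	fn setup_vma(vma: &VmAreaNew) -> Result {
	    // Fails with EINVAL if userspace asked for a writable mapping;
	    // the #[must_use] on Result forces us to handle that case.
	    vma.try_clear_maywrite()?;
	    vma.set_io();
	    vma.set_dontexpand();
	    Ok(())
	}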

Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/mm/virt.rs | 186 ++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 185 insertions(+), 1 deletion(-)

diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
index 64a0a47070a8..5847fe73db17 100644
--- a/rust/kernel/mm/virt.rs
+++ b/rust/kernel/mm/virt.rs
@@ -16,7 +16,7 @@
 
 use crate::{
     bindings,
-    error::{to_result, Result},
+    error::{code::EINVAL, to_result, Result},
     mm::MmWithUser,
     page::Page,
     types::Opaque,
@@ -203,6 +203,190 @@ pub fn vm_insert_page(&self, address: usize, page: &Page) -> Result {
     }
 }
 
+/// A configuration object for setting up a VMA in an `f_ops->mmap()` hook.
+///
+/// The `f_ops->mmap()` hook is called when a new VMA is being created, and the hook is able to
+/// configure the VMA in various ways to fit the driver that owns it. Using `VmAreaNew` indicates
+/// that you are allowed to perform operations on the VMA that can only be performed before the VMA
+/// is fully initialized.
+///
+/// # Invariants
+///
+/// For the duration of 'a, the referenced vma must be undergoing initialization in an
+/// `f_ops->mmap()` hook.
+pub struct VmAreaNew {
+    vma: VmAreaRef,
+}
+
+// Make all `VmAreaRef` methods available on `VmAreaNew`.
+impl Deref for VmAreaNew {
+    type Target = VmAreaRef;
+
+    #[inline]
+    fn deref(&self) -> &VmAreaRef {
+        &self.vma
+    }
+}
+
+impl VmAreaNew {
+    /// Access a virtual memory area given a raw pointer.
+    ///
+    /// # Safety
+    ///
+    /// Callers must ensure that `vma` is undergoing initial vma setup for the duration of 'a.
+    #[inline]
+    pub unsafe fn from_raw<'a>(vma: *mut bindings::vm_area_struct) -> &'a Self {
+        // SAFETY: The caller ensures that the invariants are satisfied for the duration of 'a.
+        unsafe { &*vma.cast() }
+    }
+
+    /// Internal method for updating the vma flags.
+    ///
+    /// # Safety
+    ///
+    /// This must not be used to set the flags to an invalid value.
+    #[inline]
+    unsafe fn update_flags(&self, set: vm_flags_t, unset: vm_flags_t) {
+        let mut flags = self.flags();
+        flags |= set;
+        flags &= !unset;
+
+        // SAFETY: This is not a data race: the vma is undergoing initial setup, so it's not yet
+        // shared. Additionally, `VmAreaNew` is `!Sync`, so it cannot be used to write in parallel.
+        // The caller promises that this does not set the flags to an invalid value.
+        unsafe { (*self.as_ptr()).__bindgen_anon_2.__vm_flags = flags };
+    }
+
+    /// Set the `VM_MIXEDMAP` flag on this vma.
+    ///
+    /// This enables the vma to contain both `struct page` and pure PFN pages. Returns a reference
+    /// that can be used to call `vm_insert_page` on the vma.
+    #[inline]
+    pub fn set_mixedmap(&self) -> &VmAreaMixedMap {
+        // SAFETY: We don't yet provide a way to set VM_PFNMAP, so this cannot put the flags in an
+        // invalid state.
+        unsafe { self.update_flags(flags::MIXEDMAP, 0) };
+
+        // SAFETY: We just set `VM_MIXEDMAP` on the vma.
+        unsafe { VmAreaMixedMap::from_raw(self.vma.as_ptr()) }
+    }
+
+    /// Set the `VM_IO` flag on this vma.
+    ///
+    /// This is used for memory mapped IO and similar. The flag tells other parts of the kernel to
+    /// avoid looking at the pages. For memory mapped IO this is useful as accesses to the pages
+    /// could have side effects.
+    #[inline]
+    pub fn set_io(&self) {
+        // SAFETY: Setting the VM_IO flag is always okay.
+        unsafe { self.update_flags(flags::IO, 0) };
+    }
+
+    /// Set the `VM_DONTEXPAND` flag on this vma.
+    ///
+    /// This prevents the vma from being expanded with `mremap()`.
+    #[inline]
+    pub fn set_dontexpand(&self) {
+        // SAFETY: Setting the VM_DONTEXPAND flag is always okay.
+        unsafe { self.update_flags(flags::DONTEXPAND, 0) };
+    }
+
+    /// Set the `VM_DONTCOPY` flag on this vma.
+    ///
+    /// This prevents the vma from being copied on fork. This option is only permanent if `VM_IO`
+    /// is set.
+    #[inline]
+    pub fn set_dontcopy(&self) {
+        // SAFETY: Setting the VM_DONTCOPY flag is always okay.
+        unsafe { self.update_flags(flags::DONTCOPY, 0) };
+    }
+
+    /// Set the `VM_DONTDUMP` flag on this vma.
+    ///
+    /// This prevents the vma from being included in core dumps. This option is only permanent if
+    /// `VM_IO` is set.
+    #[inline]
+    pub fn set_dontdump(&self) {
+        // SAFETY: Setting the VM_DONTDUMP flag is always okay.
+        unsafe { self.update_flags(flags::DONTDUMP, 0) };
+    }
+
+    /// Returns whether `VM_READ` is set.
+    ///
+    /// This flag indicates whether userspace is mapping this vma as readable.
+    #[inline]
+    pub fn readable(&self) -> bool {
+        (self.flags() & flags::READ) != 0
+    }
+
+    /// Try to clear the `VM_MAYREAD` flag, failing if `VM_READ` is set.
+    ///
+    /// This flag indicates whether userspace is allowed to make this vma readable with
+    /// `mprotect()`.
+    ///
+    /// Note that this operation is irreversible. Once `VM_MAYREAD` has been cleared, it can never
+    /// be set again.
+    #[inline]
+    pub fn try_clear_mayread(&self) -> Result {
+        if self.readable() {
+            return Err(EINVAL);
+        }
+        // SAFETY: Clearing `VM_MAYREAD` is okay when `VM_READ` is not set.
+        unsafe { self.update_flags(0, flags::MAYREAD) };
+        Ok(())
+    }
+
+    /// Returns whether `VM_WRITE` is set.
+    ///
+    /// This flag indicates whether userspace is mapping this vma as writable.
+    #[inline]
+    pub fn writable(&self) -> bool {
+        (self.flags() & flags::WRITE) != 0
+    }
+
+    /// Try to clear the `VM_MAYWRITE` flag, failing if `VM_WRITE` is set.
+    ///
+    /// This flag indicates whether userspace is allowed to make this vma writable with
+    /// `mprotect()`.
+    ///
+    /// Note that this operation is irreversible. Once `VM_MAYWRITE` has been cleared, it can never
+    /// be set again.
+    #[inline]
+    pub fn try_clear_maywrite(&self) -> Result {
+        if self.writable() {
+            return Err(EINVAL);
+        }
+        // SAFETY: Clearing `VM_MAYWRITE` is okay when `VM_WRITE` is not set.
+        unsafe { self.update_flags(0, flags::MAYWRITE) };
+        Ok(())
+    }
+
+    /// Returns whether `VM_EXEC` is set.
+    ///
+    /// This flag indicates whether userspace is mapping this vma as executable.
+    #[inline]
+    pub fn executable(&self) -> bool {
+        (self.flags() & flags::EXEC) != 0
+    }
+
+    /// Try to clear the `VM_MAYEXEC` flag, failing if `VM_EXEC` is set.
+    ///
+    /// This flag indicates whether userspace is allowed to make this vma executable with
+    /// `mprotect()`.
+    ///
+    /// Note that this operation is irreversible. Once `VM_MAYEXEC` has been cleared, it can never
+    /// be set again.
+    #[inline]
+    pub fn try_clear_mayexec(&self) -> Result {
+        if self.executable() {
+            return Err(EINVAL);
+        }
+        // SAFETY: Clearing `VM_MAYEXEC` is okay when `VM_EXEC` is not set.
+        unsafe { self.update_flags(0, flags::MAYEXEC) };
+        Ok(())
+    }
+}
+
 /// The integer type used for vma flags.
 #[doc(inline)]
 pub use bindings::vm_flags_t;

-- 
2.48.1.362.g079036d154-goog



* [PATCH v13 7/8] rust: miscdevice: add mmap support
  2025-02-03 12:14 [PATCH v13 0/8] Rust support for mm_struct, vm_area_struct, and mmap Alice Ryhl
                   ` (5 preceding siblings ...)
  2025-02-03 12:14 ` [PATCH v13 6/8] mm: rust: add VmAreaNew for f_ops->mmap() Alice Ryhl
@ 2025-02-03 12:14 ` Alice Ryhl
  2025-02-03 12:14 ` [PATCH v13 8/8] task: rust: rework how current is accessed Alice Ryhl
  7 siblings, 0 replies; 23+ messages in thread
From: Alice Ryhl @ 2025-02-03 12:14 UTC (permalink / raw)
  To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Liam R. Howlett, Andrew Morton, Greg Kroah-Hartman,
	Arnd Bergmann, Jann Horn, Suren Baghdasaryan
  Cc: Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux, Alice Ryhl

Add the ability to write a file_operations->mmap hook in Rust when using
the miscdevice abstraction. The `vma` argument to the `mmap` hook uses
the `VmAreaNew` type from the previous commit; this type provides the
correct set of operations for a file_operations->mmap hook.
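
A driver-side sketch (`Sample` and its `Ptr` type are placeholder
names; the remaining required trait items are elided):

	impl kernel::miscdevice::MiscDevice for Sample {
	    type Ptr = Pin<KBox<Sample>>;

	    fn mmap(_me: Pin<&Sample>, _file: &File, vma: &VmAreaNew) -> Result {
	        // Configure the vma before the kernel completes its setup.
	        vma.set_io();
	        vma.set_dontexpand();
	        Ok(())
	    }

	    // open() and the other items are elided here.
	}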

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/miscdevice.rs | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/rust/kernel/miscdevice.rs b/rust/kernel/miscdevice.rs
index e14433b2ab9d..97b2384f0816 100644
--- a/rust/kernel/miscdevice.rs
+++ b/rust/kernel/miscdevice.rs
@@ -14,6 +14,7 @@
     error::{to_result, Error, Result, VTABLE_DEFAULT_ERROR},
     ffi::{c_int, c_long, c_uint, c_ulong},
     fs::File,
+    mm::virt::VmAreaNew,
     prelude::*,
     seq_file::SeqFile,
     str::CStr,
@@ -119,6 +120,22 @@ fn release(device: Self::Ptr, _file: &File) {
         drop(device);
     }
 
+    /// Handle for mmap.
+    ///
+    /// This function is invoked when a user space process invokes the `mmap` system call on
+    /// `file`. The function is a callback that is part of the VMA initializer. The kernel will do
+    /// initial setup of the VMA before calling this function. The function can then interact with
+    /// the VMA initialization by calling methods of `vma`. If the function does not return an
+    /// error, the kernel will complete initialization of the VMA according to the properties of
+    /// `vma`.
+    fn mmap(
+        _device: <Self::Ptr as ForeignOwnable>::Borrowed<'_>,
+        _file: &File,
+        _vma: &VmAreaNew,
+    ) -> Result {
+        kernel::build_error!(VTABLE_DEFAULT_ERROR)
+    }
+
     /// Handler for ioctls.
     ///
     /// The `cmd` argument is usually manipulated using the utilities in [`kernel::ioctl`].
@@ -176,6 +193,7 @@ impl<T: MiscDevice> VtableHelper<T> {
         const VTABLE: bindings::file_operations = bindings::file_operations {
             open: Some(fops_open::<T>),
             release: Some(fops_release::<T>),
+            mmap: maybe_fn(T::HAS_MMAP, fops_mmap::<T>),
             unlocked_ioctl: maybe_fn(T::HAS_IOCTL, fops_ioctl::<T>),
             #[cfg(CONFIG_COMPAT)]
             compat_ioctl: if T::HAS_COMPAT_IOCTL {
@@ -257,6 +275,32 @@ impl<T: MiscDevice> VtableHelper<T> {
     0
 }
 
+/// # Safety
+///
+/// `file` must be a valid file that is associated with a `MiscDeviceRegistration<T>`.
+/// `vma` must be a vma that is currently being mmap'ed with this file.
+unsafe extern "C" fn fops_mmap<T: MiscDevice>(
+    file: *mut bindings::file,
+    vma: *mut bindings::vm_area_struct,
+) -> c_int {
+    // SAFETY: The mmap call of a file can access the private data.
+    let private = unsafe { (*file).private_data };
+    // SAFETY: This is a Rust Miscdevice, so we call `into_foreign` in `open` and `from_foreign` in
+    // `release`, and `fops_mmap` is guaranteed to be called between those two operations.
+    let device = unsafe { <T::Ptr as ForeignOwnable>::borrow(private) };
+    // SAFETY: The caller provides a vma that is undergoing initial VMA setup.
+    let area = unsafe { VmAreaNew::from_raw(vma) };
+    // SAFETY:
+    // * The file is valid for the duration of this call.
+    // * There is no active fdget_pos region on the file on this thread.
+    let file = unsafe { File::from_raw_file(file) };
+
+    match T::mmap(device, file, area) {
+        Ok(()) => 0,
+        Err(err) => err.to_errno(),
+    }
+}
+
 /// # Safety
 ///
 /// `file` must be a valid file that is associated with a `MiscDeviceRegistration<T>`.

-- 
2.48.1.362.g079036d154-goog



* [PATCH v13 8/8] task: rust: rework how current is accessed
  2025-02-03 12:14 [PATCH v13 0/8] Rust support for mm_struct, vm_area_struct, and mmap Alice Ryhl
                   ` (6 preceding siblings ...)
  2025-02-03 12:14 ` [PATCH v13 7/8] rust: miscdevice: add mmap support Alice Ryhl
@ 2025-02-03 12:14 ` Alice Ryhl
  7 siblings, 0 replies; 23+ messages in thread
From: Alice Ryhl @ 2025-02-03 12:14 UTC (permalink / raw)
  To: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Liam R. Howlett, Andrew Morton, Greg Kroah-Hartman,
	Arnd Bergmann, Jann Horn, Suren Baghdasaryan
  Cc: Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux, Alice Ryhl

Introduce a new type called `CurrentTask` that lets you perform various
operations that are only safe on the `current` task. Use the new type to
provide a way to access the current mm without incrementing its
refcount.

With this change, you can write code such as

	let vma = current!().mm().lock_vma_under_rcu(addr);

without incrementing any refcounts.
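
Since `mm()` returns an `Option` (it is `None` for kernel threads
without an mm), a fuller sketch of the pattern looks like this,
assuming the guard-returning `lock_vma_under_rcu()` from patch 4 and
some hypothetical user address `addr`:

	if let Some(mm) = current!().mm() {
	    if let Some(vma) = mm.lock_vma_under_rcu(addr) {
	        // Use `vma` here; the vma read lock is released when the
	        // guard goes out of scope.
	    }
	}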

This replaces the existing abstractions for accessing the current pid
namespace. With the old approach, every field access to current involves
both a macro and an unsafe helper function. The new approach simplifies
that to a single safe function on the `CurrentTask` type. This makes it
less heavy-weight to add additional current accessors in the future.

That said, creating a `CurrentTask` type like the one in this patch
requires that we are careful to ensure that it cannot escape the current
task or otherwise access things after they are freed. To do this, I
declared that it cannot escape the current "task context", where I
defined a "task context" as essentially the region in which `current`
remains unchanged. So e.g., release_task() or begin_new_exec() would
leave the task context.

If a userspace thread returns to userspace and later makes another
syscall, then I consider the two syscalls to be different task contexts.
This allows values stored in that task to be modified between syscalls,
even if they're guaranteed to be immutable during a syscall.

Ensuring correctness of `CurrentTask` is slightly tricky if we also want
the ability to have a safe `kthread_use_mm()` implementation in Rust. To
support that, there are two patterns we need to ensure are safe:

	// Case 1: current!() called inside the scope.
	let mm;
	kthread_use_mm(some_mm, || {
	    mm = current!().mm();
	});
	drop(some_mm);
	mm.do_something(); // UAF

and:

	// Case 2: current!() called before the scope.
	let mm;
	let task = current!();
	kthread_use_mm(some_mm, || {
	    mm = task.mm();
	});
	drop(some_mm);
	mm.do_something(); // UAF

The existing `current!()` abstraction already natively prevents the
first case: The `&CurrentTask` would be tied to the inner scope, so the
borrow-checker ensures that no reference derived from it can escape the
scope.

Fixing the second case is a bit more tricky. The solution is to
essentially pretend that the contents of the scope execute on a
different thread, which means that only thread-safe types can cross the
boundary. Since `CurrentTask` is marked `NotThreadSafe`, attempts to
move it to another thread will fail, and this includes our pretend
thread boundary.

This has the disadvantage that other types that aren't thread-safe for
reasons unrelated to `current` also cannot be moved across the
`kthread_use_mm()` boundary. I consider this an acceptable tradeoff.
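
For reference, a hypothetical signature for such a scope-based wrapper
(not part of this series) might look like the sketch below, assuming
bindings for kthread_use_mm()/kthread_unuse_mm() are available; the
`Send` bounds are what make the pretend thread boundary visible to the
compiler:

	pub fn with_kthread_mm<F, R>(mm: &MmWithUser, f: F) -> R
	where
	    F: FnOnce() -> R + Send,
	    R: Send,
	{
	    // SAFETY: Sketch only; a real abstraction must also guarantee
	    // that this is called on a kthread without a current mm.
	    unsafe { bindings::kthread_use_mm(mm.as_raw()) };
	    let ret = f();
	    // SAFETY: Paired with the kthread_use_mm() call above.
	    unsafe { bindings::kthread_unuse_mm(mm.as_raw()) };
	    ret
	}

Because `&CurrentTask` is `NotThreadSafe`, it satisfies neither `Send`
bound here, so it can neither be captured by `f` nor escape through
the return value.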

Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
 rust/kernel/task.rs | 247 +++++++++++++++++++++++++++-------------------------
 1 file changed, 129 insertions(+), 118 deletions(-)

diff --git a/rust/kernel/task.rs b/rust/kernel/task.rs
index 07bc22a7645c..0b6cb9a83a2e 100644
--- a/rust/kernel/task.rs
+++ b/rust/kernel/task.rs
@@ -7,6 +7,7 @@
 use crate::{
     bindings,
     ffi::{c_int, c_long, c_uint},
+    mm::MmWithUser,
     pid_namespace::PidNamespace,
     types::{ARef, NotThreadSafe, Opaque},
 };
@@ -31,22 +32,20 @@
 #[macro_export]
 macro_rules! current {
     () => {
-        // SAFETY: Deref + addr-of below create a temporary `TaskRef` that cannot outlive the
-        // caller.
+        // SAFETY: This expression creates a temporary value that is dropped at the end of the
+        // caller's scope. The following mechanisms ensure that the resulting `&CurrentTask` cannot
+        // leave current task context:
+        //
+        // * To return to userspace, the caller must leave the current scope.
+        // * Operations such as `begin_new_exec()` are necessarily unsafe and the caller of
+        //   `begin_new_exec()` is responsible for safety.
+        // * Rust abstractions for things such as a `kthread_use_mm()` scope must require the
+        //   closure to be `Send`, so the `NotThreadSafe` field of `CurrentTask` ensures that the
+        //   `&CurrentTask` cannot cross the scope in either direction.
         unsafe { &*$crate::task::Task::current() }
     };
 }
 
-/// Returns the currently running task's pid namespace.
-#[macro_export]
-macro_rules! current_pid_ns {
-    () => {
-        // SAFETY: Deref + addr-of below create a temporary `PidNamespaceRef` that cannot outlive
-        // the caller.
-        unsafe { &*$crate::task::Task::current_pid_ns() }
-    };
-}
-
 /// Wraps the kernel's `struct task_struct`.
 ///
 /// # Invariants
@@ -85,7 +84,7 @@ macro_rules! current_pid_ns {
 /// impl State {
 ///     fn new() -> Self {
 ///         Self {
-///             creator: current!().into(),
+///             creator: ARef::from(&**current!()),
 ///             index: 0,
 ///         }
 ///     }
@@ -105,6 +104,44 @@ unsafe impl Send for Task {}
 // synchronised by C code (e.g., `signal_pending`).
 unsafe impl Sync for Task {}
 
+/// Represents the [`Task`] in the `current` global.
+///
+/// This type exists to provide more efficient operations that are only valid on the current task.
+/// For example, to retrieve the pid-namespace of a task, you must use RCU protection unless it is
+/// the current task.
+///
+/// # Invariants
+///
+/// Each value of this type must only be accessed from the task context it was created within.
+///
+/// Of course, every thread is in a different task context, but for the purposes of this invariant,
+/// these operations also permanently leave the task context:
+///
+/// * Returning to userspace from system call context.
+/// * Calling `release_task()`.
+/// * Calling `begin_new_exec()` in a binary format loader.
+///
+/// Other operations temporarily create a new sub-context:
+///
+/// * Calling `kthread_use_mm()` creates a new context, and `kthread_unuse_mm()` returns to the
+///   old context.
+///
+/// This means that a `CurrentTask` obtained before a `kthread_use_mm()` call may be used again
+/// once `kthread_unuse_mm()` is called, but it must not be used between these two calls.
+/// Conversely, a `CurrentTask` obtained between a `kthread_use_mm()`/`kthread_unuse_mm()` pair
+/// must not be used after `kthread_unuse_mm()`.
+#[repr(transparent)]
+pub struct CurrentTask(Task, NotThreadSafe);
+
+// Make all `Task` methods available on `CurrentTask`.
+impl Deref for CurrentTask {
+    type Target = Task;
+    #[inline]
+    fn deref(&self) -> &Task {
+        &self.0
+    }
+}
+
 /// The type of process identifiers (PIDs).
 type Pid = bindings::pid_t;
 
@@ -131,119 +168,29 @@ pub fn current_raw() -> *mut bindings::task_struct {
     ///
     /// # Safety
     ///
-    /// Callers must ensure that the returned object doesn't outlive the current task/thread.
-    pub unsafe fn current() -> impl Deref<Target = Task> {
-        struct TaskRef<'a> {
-            task: &'a Task,
-            _not_send: NotThreadSafe,
+    /// Callers must ensure that the returned object is only used to access a [`CurrentTask`]
+    /// within the task context that was active when this function was called. For more details,
+    /// see the invariants section for [`CurrentTask`].
+    pub unsafe fn current() -> impl Deref<Target = CurrentTask> {
+        struct TaskRef {
+            task: *const CurrentTask,
         }
 
-        impl Deref for TaskRef<'_> {
-            type Target = Task;
+        impl Deref for TaskRef {
+            type Target = CurrentTask;
 
             fn deref(&self) -> &Self::Target {
-                self.task
+                // SAFETY: The returned reference borrows from this `TaskRef`, so it cannot outlive
+                // the `TaskRef`, which the caller of `Task::current()` has promised will not
+                // outlive the task/thread for which `self.task` is the `current` pointer. Thus, it
+                // is okay to return a `CurrentTask` reference here.
+                unsafe { &*self.task }
             }
         }
 
-        let current = Task::current_raw();
         TaskRef {
-            // SAFETY: If the current thread is still running, the current task is valid. Given
-            // that `TaskRef` is not `Send`, we know it cannot be transferred to another thread
-            // (where it could potentially outlive the caller).
-            task: unsafe { &*current.cast() },
-            _not_send: NotThreadSafe,
-        }
-    }
-
-    /// Returns a PidNamespace reference for the currently executing task's/thread's pid namespace.
-    ///
-    /// This function can be used to create an unbounded lifetime by e.g., storing the returned
-    /// PidNamespace in a global variable which would be a bug. So the recommended way to get the
-    /// current task's/thread's pid namespace is to use the [`current_pid_ns`] macro because it is
-    /// safe.
-    ///
-    /// # Safety
-    ///
-    /// Callers must ensure that the returned object doesn't outlive the current task/thread.
-    pub unsafe fn current_pid_ns() -> impl Deref<Target = PidNamespace> {
-        struct PidNamespaceRef<'a> {
-            task: &'a PidNamespace,
-            _not_send: NotThreadSafe,
-        }
-
-        impl Deref for PidNamespaceRef<'_> {
-            type Target = PidNamespace;
-
-            fn deref(&self) -> &Self::Target {
-                self.task
-            }
-        }
-
-        // The lifetime of `PidNamespace` is bound to `Task` and `struct pid`.
-        //
-        // The `PidNamespace` of a `Task` doesn't ever change once the `Task` is alive. A
-        // `unshare(CLONE_NEWPID)` or `setns(fd_pidns/pidfd, CLONE_NEWPID)` will not have an effect
-        // on the calling `Task`'s pid namespace. It will only effect the pid namespace of children
-        // created by the calling `Task`. This invariant guarantees that after having acquired a
-        // reference to a `Task`'s pid namespace it will remain unchanged.
-        //
-        // When a task has exited and been reaped `release_task()` will be called. This will set
-        // the `PidNamespace` of the task to `NULL`. So retrieving the `PidNamespace` of a task
-        // that is dead will return `NULL`. Note, that neither holding the RCU lock nor holding a
-        // referencing count to
-        // the `Task` will prevent `release_task()` being called.
-        //
-        // In order to retrieve the `PidNamespace` of a `Task` the `task_active_pid_ns()` function
-        // can be used. There are two cases to consider:
-        //
-        // (1) retrieving the `PidNamespace` of the `current` task
-        // (2) retrieving the `PidNamespace` of a non-`current` task
-        //
-        // From system call context retrieving the `PidNamespace` for case (1) is always safe and
-        // requires neither RCU locking nor a reference count to be held. Retrieving the
-        // `PidNamespace` after `release_task()` for current will return `NULL` but no codepath
-        // like that is exposed to Rust.
-        //
-        // Retrieving the `PidNamespace` from system call context for (2) requires RCU protection.
-        // Accessing `PidNamespace` outside of RCU protection requires a reference count that
-        // must've been acquired while holding the RCU lock. Note that accessing a non-`current`
-        // task means `NULL` can be returned as the non-`current` task could have already passed
-        // through `release_task()`.
-        //
-        // To retrieve (1) the `current_pid_ns!()` macro should be used which ensure that the
-        // returned `PidNamespace` cannot outlive the calling scope. The associated
-        // `current_pid_ns()` function should not be called directly as it could be abused to
-        // created an unbounded lifetime for `PidNamespace`. The `current_pid_ns!()` macro allows
-        // Rust to handle the common case of accessing `current`'s `PidNamespace` without RCU
-        // protection and without having to acquire a reference count.
-        //
-        // For (2) the `task_get_pid_ns()` method must be used. This will always acquire a
-        // reference on `PidNamespace` and will return an `Option` to force the caller to
-        // explicitly handle the case where `PidNamespace` is `None`, something that tends to be
-        // forgotten when doing the equivalent operation in `C`. Missing RCU primitives make it
-        // difficult to perform operations that are otherwise safe without holding a reference
-        // count as long as RCU protection is guaranteed. But it is not important currently. But we
-        // do want it in the future.
-        //
-        // Note for (2) the required RCU protection around calling `task_active_pid_ns()`
-        // synchronizes against putting the last reference of the associated `struct pid` of
-        // `task->thread_pid`. The `struct pid` stored in that field is used to retrieve the
-        // `PidNamespace` of the caller. When `release_task()` is called `task->thread_pid` will be
-        // `NULL`ed and `put_pid()` on said `struct pid` will be delayed in `free_pid()` via
-        // `call_rcu()` allowing everyone with an RCU protected access to the `struct pid` acquired
-        // from `task->thread_pid` to finish.
-        //
-        // SAFETY: The current task's pid namespace is valid as long as the current task is running.
-        let pidns = unsafe { bindings::task_active_pid_ns(Task::current_raw()) };
-        PidNamespaceRef {
-            // SAFETY: If the current thread is still running, the current task and its associated
-            // pid namespace are valid. `PidNamespaceRef` is not `Send`, so we know it cannot be
-            // transferred to another thread (where it could potentially outlive the current
-            // `Task`). The caller needs to ensure that the PidNamespaceRef doesn't outlive the
-            // current task/thread.
-            task: unsafe { PidNamespace::from_ptr(pidns) },
-            _not_send: NotThreadSafe,
+            // CAST: The layout of `struct task_struct` and `CurrentTask` is identical.
+            task: Task::current_raw().cast(),
         }
     }
 
@@ -326,6 +273,70 @@ pub fn wake_up(&self) {
     }
 }
 
+impl CurrentTask {
+    /// Access the address space of the current task.
+    ///
+    /// This function does not touch the refcount of the mm.
+    #[inline]
+    pub fn mm(&self) -> Option<&MmWithUser> {
+        // SAFETY: The `mm` field of `current` is not modified from other threads, so reading it is
+        // not a data race.
+        let mm = unsafe { (*self.as_ptr()).mm };
+
+        if mm.is_null() {
+            return None;
+        }
+
+        // SAFETY: If `current->mm` is non-null, then it references a valid mm with a non-zero
+        // value of `mm_users`. Furthermore, the returned `&MmWithUser` borrows from this
+        // `CurrentTask`, so it cannot escape the scope in which the current pointer was obtained.
+        //
+        // This is safe even if `kthread_use_mm()`/`kthread_unuse_mm()` are used. There are two
+        // relevant cases:
+        // * If the `&CurrentTask` was created before `kthread_use_mm()`, then it cannot be
+        //   accessed during the `kthread_use_mm()`/`kthread_unuse_mm()` scope due to the
+        //   `NotThreadSafe` field of `CurrentTask`.
+        // * If the `&CurrentTask` was created within a `kthread_use_mm()`/`kthread_unuse_mm()`
+        //   scope, then the `&CurrentTask` cannot escape that scope, so the returned `&MmWithUser`
+        //   also cannot escape that scope.
+        // In either case, it's not possible to read `current->mm` and keep using it after the
+        // scope is ended with `kthread_unuse_mm()`.
+        Some(unsafe { MmWithUser::from_raw(mm) })
+    }
+
+    /// Access the pid namespace of the current task.
+    ///
+    /// This function does not touch the refcount of the namespace or use RCU protection.
+    ///
+    /// To access the pid namespace of another task, see [`Task::get_pid_ns`].
+    #[doc(alias = "task_active_pid_ns")]
+    #[inline]
+    pub fn active_pid_ns(&self) -> Option<&PidNamespace> {
+        // SAFETY: It is safe to call `task_active_pid_ns` without RCU protection when calling it
+        // on the current task.
+        let active_ns = unsafe { bindings::task_active_pid_ns(self.as_ptr()) };
+
+        if active_ns.is_null() {
+            return None;
+        }
+
+        // The lifetime of `PidNamespace` is bound to `Task` and `struct pid`.
+        //
+        // The `PidNamespace` of a `Task` doesn't ever change once the `Task` is alive.
+        //
+        // From system call context retrieving the `PidNamespace` for the current task is always
+        // safe and requires neither RCU locking nor a reference count to be held. Retrieving the
+        // `PidNamespace` after `release_task()` for current will return `NULL` but no codepath
+        // like that is exposed to Rust.
+        //
+        // SAFETY: If `current`'s pid ns is non-null, then it references a valid pid ns.
+        // Furthermore, the returned `&PidNamespace` borrows from this `CurrentTask`, so it cannot
+        // escape the scope in which the current pointer was obtained, e.g. it cannot live past a
+        // `release_task()` call.
+        Some(unsafe { PidNamespace::from_ptr(active_ns) })
+    }
+}
+
 // SAFETY: The type invariants guarantee that `Task` is always refcounted.
 unsafe impl crate::types::AlwaysRefCounted for Task {
     fn inc_ref(&self) {

-- 
2.48.1.362.g079036d154-goog



* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-03 12:14 ` [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access Alice Ryhl
@ 2025-02-03 15:44   ` Liam R. Howlett
  2025-02-04 12:45     ` Alice Ryhl
  2025-02-04 12:46     ` Alice Ryhl
  0 siblings, 2 replies; 23+ messages in thread
From: Liam R. Howlett @ 2025-02-03 15:44 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Andrew Morton, Greg Kroah-Hartman, Arnd Bergmann,
	Jann Horn, Suren Baghdasaryan, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, linux-kernel, linux-mm, rust-for-linux

* Alice Ryhl <aliceryhl@google.com> [250203 07:15]:

Hi Alice,

These comments are probably for my education.  I'm holding back my
training on naming and trying to reconcile the snake-case and upper case
camel of the rust language.

> This adds a type called VmAreaRef which is used when referencing a vma

How did you land on VmAreaRef?  Why not VmaRef?  We mostly use vma for
vmas today, not vm_area.  Is it because the header declares it
vm_area_struct?  I'd be happier with VmaRef, but I'm guessing this is a
direct translation from vm_area_struct?

> that you have read access to. Here, read access means that you hold
> either the mmap read lock or the vma read lock (or stronger).

I have questions.. they are below.

> 
> Additionally, a vma_lookup method is added to the mmap read guard, which
> enables you to obtain a &VmAreaRef in safe Rust code.
> 
> This patch only provides a way to lock the mmap read lock, but a
> follow-up patch also provides a way to just lock the vma read lock.
> 
> Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Reviewed-by: Jann Horn <jannh@google.com>
> Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> ---
>  rust/helpers/mm.c      |   6 ++
>  rust/kernel/mm.rs      |  21 +++++
>  rust/kernel/mm/virt.rs | 215 +++++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 242 insertions(+)
> 
> diff --git a/rust/helpers/mm.c b/rust/helpers/mm.c
> index 7201747a5d31..7b72eb065a3e 100644
> --- a/rust/helpers/mm.c
> +++ b/rust/helpers/mm.c
> @@ -37,3 +37,9 @@ void rust_helper_mmap_read_unlock(struct mm_struct *mm)
>  {
>  	mmap_read_unlock(mm);
>  }
> +
> +struct vm_area_struct *rust_helper_vma_lookup(struct mm_struct *mm,
> +					      unsigned long addr)
> +{
> +	return vma_lookup(mm, addr);
> +}
> diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
> index 2fb5f440af60..bd6ff40f106f 100644
> --- a/rust/kernel/mm.rs
> +++ b/rust/kernel/mm.rs
> @@ -17,6 +17,8 @@
>  };
>  use core::{ops::Deref, ptr::NonNull};
>  
> +pub mod virt;
> +
>  /// A wrapper for the kernel's `struct mm_struct`.
>  ///
>  /// This represents the address space of a userspace process, so each process has one `Mm`
> @@ -200,6 +202,25 @@ pub struct MmapReadGuard<'a> {
>      _nts: NotThreadSafe,
>  }
>  
> +impl<'a> MmapReadGuard<'a> {
> +    /// Look up a vma at the given address.
> +    #[inline]
> +    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmAreaRef> {
> +        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
> +        // for `vma_addr`.

Is this true?  In C we hold a reference to the mm and the vma can still
go away.  We get safety from the locking on the C side.

> +        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr) };
> +
> +        if vma.is_null() {
> +            None
> +        } else {
> +            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
> +            // the returned area will borrow from this read lock guard, so it can only be used
> +            // while the mmap read lock is still held.

So we have complicated the locking of the vmas with rcu and per-vma
locking recently.  We are now able to look up and use a vma under the
rcu read lock.  Does this translate to the Rust model?

I believe this is true in recent version of binder as well?

> +            unsafe { Some(virt::VmAreaRef::from_raw(vma)) }
> +        }
> +    }
> +}
> +
>  impl Drop for MmapReadGuard<'_> {
>      #[inline]
>      fn drop(&mut self) {
> diff --git a/rust/kernel/mm/virt.rs b/rust/kernel/mm/virt.rs
> new file mode 100644
> index 000000000000..dfe147cafdb3
> --- /dev/null
> +++ b/rust/kernel/mm/virt.rs
> @@ -0,0 +1,215 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +// Copyright (C) 2024 Google LLC.
> +
> +//! Virtual memory.
> +//!
> +//! This module deals with managing a single VMA in the address space of a userspace process. Each
> +//! VMA corresponds to a region of memory that the userspace process can access, and the VMA lets
> +//! you control what happens when userspace reads or writes to that region of memory.
> +//!
> +//! The module has several different Rust types that all correspond to the C type called
> +//! `vm_area_struct`. The different structs represent what kind of access you have to the VMA, e.g.
> +//! [`VmAreaRef`] is used when you hold the mmap or vma read lock. Using the appropriate struct
> +//! ensures that you can't, for example, accidentally call a function that requires holding the
> +//! write lock when you only hold the read lock.
> +
> +use crate::{bindings, mm::MmWithUser, types::Opaque};
> +
> +/// A wrapper for the kernel's `struct vm_area_struct` with read access.
> +///
> +/// It represents an area of virtual memory.
> +///
> +/// # Invariants
> +///
> +/// The caller must hold the mmap read lock or the vma read lock.
> +#[repr(transparent)]
> +pub struct VmAreaRef {
> +    vma: Opaque<bindings::vm_area_struct>,
> +}
> +
> +// Methods you can call when holding the mmap or vma read lock (or stronger). They must be usable
> +// no matter what the vma flags are.
> +impl VmAreaRef {
> +    /// Access a virtual memory area given a raw pointer.
> +    ///
> +    /// # Safety
> +    ///
> +    /// Callers must ensure that `vma` is valid for the duration of 'a, and that the mmap or vma
> +    /// read lock (or stronger) is held for at least the duration of 'a.

What is 'a?

> +    #[inline]
> +    pub unsafe fn from_raw<'a>(vma: *const bindings::vm_area_struct) -> &'a Self {

This 'a?  What can I look up in rust to know what this is about?

> +        // SAFETY: The caller ensures that the invariants are satisfied for the duration of 'a.
> +        unsafe { &*vma.cast() }
> +    }
> +
> +    /// Returns a raw pointer to this area.
> +    #[inline]
> +    pub fn as_ptr(&self) -> *mut bindings::vm_area_struct {
> +        self.vma.get()
> +    }
> +
> +    /// Access the underlying `mm_struct`.
> +    #[inline]
> +    pub fn mm(&self) -> &MmWithUser {
> +        // SAFETY: By the type invariants, this `vm_area_struct` is valid and we hold the mmap/vma
> +        // read lock or stronger. This implies that the underlying mm has a non-zero value of
> +        // `mm_users`.

Again, I'm not sure this statement still holds as it once did?

> +        unsafe { MmWithUser::from_raw((*self.as_ptr()).vm_mm) }
> +    }
> +
> +    /// Returns the flags associated with the virtual memory area.
> +    ///
> +    /// The possible flags are a combination of the constants in [`flags`].
> +    #[inline]
> +    pub fn flags(&self) -> vm_flags_t {
> +        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
> +        // access is not a data race.
> +        unsafe { (*self.as_ptr()).__bindgen_anon_2.vm_flags }
> +    }
> +
> +    /// Returns the (inclusive) start address of the virtual memory area.
> +    #[inline]
> +    pub fn start(&self) -> usize {
> +        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
> +        // access is not a data race.
> +        unsafe { (*self.as_ptr()).__bindgen_anon_1.__bindgen_anon_1.vm_start }
> +    }
> +
> +    /// Returns the (exclusive) end address of the virtual memory area.
> +    #[inline]
> +    pub fn end(&self) -> usize {
> +        // SAFETY: By the type invariants, the caller holds at least the mmap read lock, so this
> +        // access is not a data race.
> +        unsafe { (*self.as_ptr()).__bindgen_anon_1.__bindgen_anon_1.vm_end }
> +    }
> +
> +    /// Zap pages in the given page range.
> +    ///
> +    /// This clears page table mappings for the range at the leaf level, leaving all other page
> +    /// tables intact, and freeing any memory referenced by the VMA in this range. That is,
> +    /// anonymous memory is completely freed, file-backed memory has its reference count on page
> +    /// cache folios dropped, and any dirty data will still be written back to disk as usual.
> +    ///
> +    /// It may seem odd that we clear at the leaf level; this is, however, a product of the page
> +    /// table structure used to map physical memory into a virtual address space - each virtual
> +    /// address actually consists of a bitmap of array indices into page tables, which form a
> +    /// hierarchical page table level structure.
> +    ///
> +    /// As a result, each page table level maps a multiple of page table levels below, and thus
> +    /// spans ever-increasing ranges of pages. At the leaf or PTE level, we map the actual physical
> +    /// memory.
> +    ///
> +    /// It is here that a zap operates, as it is the only place we can be certain of clearing without
> +    /// impacting any other virtual mappings. It is an implementation detail as to whether the
> +    /// kernel goes further in freeing unused page tables, but for the purposes of this operation
> +    /// we must only assume that the leaf level is cleared.
> +    #[inline]
> +    pub fn zap_page_range_single(&self, address: usize, size: usize) {
> +        let (end, did_overflow) = address.overflowing_add(size);
> +        if did_overflow || address < self.start() || self.end() < end {
> +            // TODO: call WARN_ONCE once Rust version of it is added
> +            return;
> +        }
> +
> +        // SAFETY: By the type invariants, the caller has read access to this VMA, which is
> +        // sufficient for this method call. This method has no requirements on the vma flags. The
> +        // address range is checked to be within the vma.
> +        unsafe {
> +            bindings::zap_page_range_single(
> +                self.as_ptr(),
> +                address,
> +                size,
> +                core::ptr::null_mut(),
> +            )
> +        };
> +    }
> +}
> +
> +/// The integer type used for vma flags.
> +#[doc(inline)]
> +pub use bindings::vm_flags_t;
> +
> +/// All possible flags for [`VmAreaRef`].
> +pub mod flags {
> +    use super::vm_flags_t;
> +    use crate::bindings;
> +
> +    /// No flags are set.
> +    pub const NONE: vm_flags_t = bindings::VM_NONE as _;
> +
> +    /// Mapping allows reads.
> +    pub const READ: vm_flags_t = bindings::VM_READ as _;
> +
> +    /// Mapping allows writes.
> +    pub const WRITE: vm_flags_t = bindings::VM_WRITE as _;
> +
> +    /// Mapping allows execution.
> +    pub const EXEC: vm_flags_t = bindings::VM_EXEC as _;
> +
> +    /// Mapping is shared.
> +    pub const SHARED: vm_flags_t = bindings::VM_SHARED as _;
> +
> +    /// Mapping may be updated to allow reads.
> +    pub const MAYREAD: vm_flags_t = bindings::VM_MAYREAD as _;
> +
> +    /// Mapping may be updated to allow writes.
> +    pub const MAYWRITE: vm_flags_t = bindings::VM_MAYWRITE as _;
> +
> +    /// Mapping may be updated to allow execution.
> +    pub const MAYEXEC: vm_flags_t = bindings::VM_MAYEXEC as _;
> +
> +    /// Mapping may be updated to be shared.
> +    pub const MAYSHARE: vm_flags_t = bindings::VM_MAYSHARE as _;
> +
> +    /// Page-ranges managed without `struct page`, just pure PFN.
> +    pub const PFNMAP: vm_flags_t = bindings::VM_PFNMAP as _;
> +
> +    /// Memory mapped I/O or similar.
> +    pub const IO: vm_flags_t = bindings::VM_IO as _;
> +
> +    /// Do not copy this vma on fork.
> +    pub const DONTCOPY: vm_flags_t = bindings::VM_DONTCOPY as _;
> +
> +    /// Cannot expand with mremap().
> +    pub const DONTEXPAND: vm_flags_t = bindings::VM_DONTEXPAND as _;
> +
> +    /// Lock the pages covered when they are faulted in.
> +    pub const LOCKONFAULT: vm_flags_t = bindings::VM_LOCKONFAULT as _;
> +
> +    /// Is a VM accounted object.
> +    pub const ACCOUNT: vm_flags_t = bindings::VM_ACCOUNT as _;
> +
> +    /// Should the VM suppress accounting.
> +    pub const NORESERVE: vm_flags_t = bindings::VM_NORESERVE as _;
> +
> +    /// Huge TLB Page VM.
> +    pub const HUGETLB: vm_flags_t = bindings::VM_HUGETLB as _;
> +
> +    /// Synchronous page faults. (DAX-specific)
> +    pub const SYNC: vm_flags_t = bindings::VM_SYNC as _;
> +
> +    /// Architecture-specific flag.
> +    pub const ARCH_1: vm_flags_t = bindings::VM_ARCH_1 as _;
> +
> +    /// Wipe VMA contents in child on fork.
> +    pub const WIPEONFORK: vm_flags_t = bindings::VM_WIPEONFORK as _;
> +
> +    /// Do not include in the core dump.
> +    pub const DONTDUMP: vm_flags_t = bindings::VM_DONTDUMP as _;
> +
> +    /// Not soft dirty clean area.
> +    pub const SOFTDIRTY: vm_flags_t = bindings::VM_SOFTDIRTY as _;
> +
> +    /// Can contain `struct page` and pure PFN pages.
> +    pub const MIXEDMAP: vm_flags_t = bindings::VM_MIXEDMAP as _;
> +
> +    /// MADV_HUGEPAGE marked this vma.
> +    pub const HUGEPAGE: vm_flags_t = bindings::VM_HUGEPAGE as _;
> +
> +    /// MADV_NOHUGEPAGE marked this vma.
> +    pub const NOHUGEPAGE: vm_flags_t = bindings::VM_NOHUGEPAGE as _;
> +
> +    /// KSM may merge identical pages.
> +    pub const MERGEABLE: vm_flags_t = bindings::VM_MERGEABLE as _;
> +}
> 
> -- 
> 2.48.1.362.g079036d154-goog
> 


* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-03 15:44   ` Liam R. Howlett
@ 2025-02-04 12:45     ` Alice Ryhl
  2025-02-04 14:55       ` Liam R. Howlett
  2025-02-04 12:46     ` Alice Ryhl
  1 sibling, 1 reply; 23+ messages in thread
From: Alice Ryhl @ 2025-02-04 12:45 UTC (permalink / raw)
  To: Liam R. Howlett, Alice Ryhl, Miguel Ojeda, Matthew Wilcox,
	Lorenzo Stoakes, Vlastimil Babka, John Hubbard, Andrew Morton,
	Greg Kroah-Hartman, Arnd Bergmann, Jann Horn, Suren Baghdasaryan,
	Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux

On Mon, Feb 3, 2025 at 4:44 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
>
> * Alice Ryhl <aliceryhl@google.com> [250203 07:15]:
>
> Hi Alice,
>
> These comments are probably for my education.  I'm holding back my
> training on naming and trying to reconcile the snake-case and upper case
> camel of the rust language.
>
> > This adds a type called VmAreaRef which is used when referencing a vma
>
> How did you land on VmAreaRef?  Why not VmaRef?  We mostly use vma for
> vmas today, not vm_area.  Is it because the header declares it
> vm_area_struct?  I'd be happier with VmaRef, but I'm guessing this is a
> direct translation from vm_area_struct?
>
> > that you have read access to. Here, read access means that you hold
> > either the mmap read lock or the vma read lock (or stronger).
>
> I have questions.. they are below.
>
> >
> > Additionally, a vma_lookup method is added to the mmap read guard, which
> > enables you to obtain a &VmAreaRef in safe Rust code.
> >
> > This patch only provides a way to lock the mmap read lock, but a
> > follow-up patch also provides a way to just lock the vma read lock.
> >
> > Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > Reviewed-by: Jann Horn <jannh@google.com>
> > Reviewed-by: Andreas Hindborg <a.hindborg@kernel.org>
> > Signed-off-by: Alice Ryhl <aliceryhl@google.com>
> > ---
> >  rust/helpers/mm.c      |   6 ++
> >  rust/kernel/mm.rs      |  21 +++++
> >  rust/kernel/mm/virt.rs | 215 +++++++++++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 242 insertions(+)
> >
> > diff --git a/rust/helpers/mm.c b/rust/helpers/mm.c
> > index 7201747a5d31..7b72eb065a3e 100644
> > --- a/rust/helpers/mm.c
> > +++ b/rust/helpers/mm.c
> > @@ -37,3 +37,9 @@ void rust_helper_mmap_read_unlock(struct mm_struct *mm)
> >  {
> >       mmap_read_unlock(mm);
> >  }
> > +
> > +struct vm_area_struct *rust_helper_vma_lookup(struct mm_struct *mm,
> > +                                           unsigned long addr)
> > +{
> > +     return vma_lookup(mm, addr);
> > +}
> > diff --git a/rust/kernel/mm.rs b/rust/kernel/mm.rs
> > index 2fb5f440af60..bd6ff40f106f 100644
> > --- a/rust/kernel/mm.rs
> > +++ b/rust/kernel/mm.rs
> > @@ -17,6 +17,8 @@
> >  };
> >  use core::{ops::Deref, ptr::NonNull};
> >
> > +pub mod virt;
> > +
> >  /// A wrapper for the kernel's `struct mm_struct`.
> >  ///
> >  /// This represents the address space of a userspace process, so each process has one `Mm`
> > @@ -200,6 +202,25 @@ pub struct MmapReadGuard<'a> {
> >      _nts: NotThreadSafe,
> >  }
> >
> > +impl<'a> MmapReadGuard<'a> {
> > +    /// Look up a vma at the given address.
> > +    #[inline]
> > +    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmAreaRef> {
> > +        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
> > +        // for `vma_addr`.
>
> Is this true?  In C we hold a reference to the mm and the vma can still
> go away.  We get safety from the locking on the C side.

Notice that this function is in the `impl MmapReadGuard` block. This
implies that you *must* hold the mmap read guard to call this
function.

The safety comment should probably be updated to mention that we hold the guard.

> > +        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr) };
> > +
> > +        if vma.is_null() {
> > +            None
> > +        } else {
> > +            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
> > +            // the returned area will borrow from this read lock guard, so it can only be used
> > +            // while the mmap read lock is still held.
>
> So We have complicated the locking of the vmas with rcu and per-vma
> locking recently.  We are now able to look up and use a vma under the
> rcu read lock.  Does this translate to rust model?
>
> I believe this is true in recent version of binder as well?

Yes. The safety requirement of VmAreaRef is that you must hold the
mmap read lock *or* the vma read lock while you have a VmAreaRef
reference. This particular method achieves that requirement by holding
the mmap read lock. But there is also a Rust lock_vma_under_rcu(), see
patch 4 for that.

> > +// Methods you can call when holding the mmap or vma read lock (or stronger). They must be usable
> > +// no matter what the vma flags are.
> > +impl VmAreaRef {
> > +    /// Access a virtual memory area given a raw pointer.
> > +    ///
> > +    /// # Safety
> > +    ///
> > +    /// Callers must ensure that `vma` is valid for the duration of 'a, and that the mmap or vma
> > +    /// read lock (or stronger) is held for at least the duration of 'a.
>
> What is 'a?
>
> > +    #[inline]
> > +    pub unsafe fn from_raw<'a>(vma: *const bindings::vm_area_struct) -> &'a Self {
>
> This 'a?  What can I look up in rust to know what this is about?

This is called a lifetime. It represents some region of code, and the
idea is that the returned reference is guaranteed valid inside the
lifetime 'a. For this particular function, the caller chooses what the
lifetime is completely unsafely, so there is no check, but in other
cases it's enforced by the compiler.

For example, if we take vma_lookup from above, then its signature is
short-hand [1] for this:

pub fn vma_lookup<'a>(&'a self, vma_addr: usize) -> Option<&'a virt::VmAreaRef> {

This signature tells the compiler that the returned pointer to
VmAreaRef can only be used in the region of code where `self` is also
valid. In this case `self` is an `MmapReadGuard` (because that's the
impl block the function is defined inside), so the returned VmAreaRef
is valid in the same region as the MmapReadGuard object. This enforces
that the returned pointer can only be used while you still hold the
read lock. If your code does not follow that requirement, then that's
a compilation error.

[1]: https://doc.rust-lang.org/book/ch10-03-lifetime-syntax.html#lifetime-elision
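
For example, this (hypothetical) misuse is rejected at compile time:

let vma;
{
    let guard = mm.mmap_read_lock();
    vma = guard.vma_lookup(addr).unwrap();
} // `guard` dropped here, which releases the mmap read lock
vma.flags(); // error[E0597]: `guard` does not live long enough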

> > +        // SAFETY: The caller ensures that the invariants are satisfied for the duration of 'a.
> > +        unsafe { &*vma.cast() }
> > +    }
> > +
> > +    /// Returns a raw pointer to this area.
> > +    #[inline]
> > +    pub fn as_ptr(&self) -> *mut bindings::vm_area_struct {
> > +        self.vma.get()
> > +    }
> > +
> > +    /// Access the underlying `mm_struct`.
> > +    #[inline]
> > +    pub fn mm(&self) -> &MmWithUser {
> > +        // SAFETY: By the type invariants, this `vm_area_struct` is valid and we hold the mmap/vma
> > +        // read lock or stronger. This implies that the underlying mm has a non-zero value of
> > +        // `mm_users`.
>
> Again, I'm not sure this statement still holds as it once did?

If it's possible to hold the mmap/vma read lock for a vma that is part
of an mm whose mm_users count has dropped to zero, then it is
incorrect for us to have this method.


Alice


* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-03 15:44   ` Liam R. Howlett
  2025-02-04 12:45     ` Alice Ryhl
@ 2025-02-04 12:46     ` Alice Ryhl
  2025-02-04 14:58       ` Liam R. Howlett
  1 sibling, 1 reply; 23+ messages in thread
From: Alice Ryhl @ 2025-02-04 12:46 UTC (permalink / raw)
  To: Liam R. Howlett, Alice Ryhl, Miguel Ojeda, Matthew Wilcox,
	Lorenzo Stoakes, Vlastimil Babka, John Hubbard, Andrew Morton,
	Greg Kroah-Hartman, Arnd Bergmann, Jann Horn, Suren Baghdasaryan,
	Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux

On Mon, Feb 3, 2025 at 4:44 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
>
> * Alice Ryhl <aliceryhl@google.com> [250203 07:15]:
>
> Hi Alice,
>
> These comments are probably for my education.  I'm holding back my
> training on naming and trying to reconcile the snake-case and upper case
> camel of the rust language.
>
> > This adds a type called VmAreaRef which is used when referencing a vma
>
> How did you land on VmAreaRef?  Why not VmaRef?  We mostly use vma for
> vmas today, not vm_area.  Is it because the header declares it
> vm_area_struct?  I'd be happier with VmaRef, but I'm guessing this is a
> direct translation from vm_area_struct?

Sorry, I missed this on my first pass.

Yes, it follows the C name. Happy to rename to VmaRef.

Alice


* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-04 12:45     ` Alice Ryhl
@ 2025-02-04 14:55       ` Liam R. Howlett
  2025-02-04 15:05         ` Alice Ryhl
  0 siblings, 1 reply; 23+ messages in thread
From: Liam R. Howlett @ 2025-02-04 14:55 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Andrew Morton, Greg Kroah-Hartman, Arnd Bergmann,
	Jann Horn, Suren Baghdasaryan, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, linux-kernel, linux-mm, rust-for-linux

* Alice Ryhl <aliceryhl@google.com> [250204 07:45]:
> On Mon, Feb 3, 2025 at 4:44 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> >
...

> > >
> > > +impl<'a> MmapReadGuard<'a> {
> > > +    /// Look up a vma at the given address.
> > > +    #[inline]
> > > +    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmAreaRef> {
> > > +        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
> > > +        // for `vma_addr`.
> >
> > Is this true?  In C we hold a reference to the mm and the vma can still
> > go away.  We get safety from the locking on the C side.
> 
> Notice that this function is in the `impl MmapReadGuard` block. This
> implies that you *must* hold the mmap read guard to call this
> function.
> 
> The safety comment should probably be updated to mention that we hold the guard.

Thanks.

Does it imply you must hold the lock or does it enforce that the lock is
held?


> 
> > > +        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr) };
> > > +
> > > +        if vma.is_null() {
> > > +            None
> > > +        } else {
> > > +            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
> > > +            // the returned area will borrow from this read lock guard, so it can only be used
> > > +            // while the mmap read lock is still held.
> >
> > So we have complicated the locking of the vmas with rcu and per-vma
> > locking recently.  We are now able to look up and use a vma under the
> > rcu read lock.  Does this translate to the Rust model?
> >
> > I believe this is true in recent version of binder as well?
> 
> Yes. The safety requirements of VmAreaRef is that you must hold the
> mmap read lock *or* the vma read lock while you have a VmAreaRef
> reference. This particular method achieves that requirement by holding
> the mmap read lock. But there is also a Rust lock_vma_under_rcu(), see
> patch 4 for that.

Right, okay.  Thanks.  You can get the reference by only holding the rcu
read lock, but you should hold the vma lock to ensure that the vma
itself (and not just the pointer) is safe to use.

...

> 
> > > +        // SAFETY: The caller ensures that the invariants are satisfied for the duration of 'a.
> > > +        unsafe { &*vma.cast() }
> > > +    }
> > > +
> > > +    /// Returns a raw pointer to this area.
> > > +    #[inline]
> > > +    pub fn as_ptr(&self) -> *mut bindings::vm_area_struct {
> > > +        self.vma.get()
> > > +    }
> > > +
> > > +    /// Access the underlying `mm_struct`.
> > > +    #[inline]
> > > +    pub fn mm(&self) -> &MmWithUser {
> > > +        // SAFETY: By the type invariants, this `vm_area_struct` is valid and we hold the mmap/vma
> > > +        // read lock or stronger. This implies that the underlying mm has a non-zero value of
> > > +        // `mm_users`.
> >
> > Again, I'm not sure this statement still holds as it once did?
> 
> If it's possible to hold the mmap/vma read lock for a vma that is part
> of an mm whose mm_users count has dropped to zero, then it is
> incorrect for us to have this method.

The non-zero statement is fine.  After re-reading it, I think this is
accurate.

Thanks for taking the time to explain things.
Liam


* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-04 12:46     ` Alice Ryhl
@ 2025-02-04 14:58       ` Liam R. Howlett
  0 siblings, 0 replies; 23+ messages in thread
From: Liam R. Howlett @ 2025-02-04 14:58 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Andrew Morton, Greg Kroah-Hartman, Arnd Bergmann,
	Jann Horn, Suren Baghdasaryan, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, linux-kernel, linux-mm, rust-for-linux

* Alice Ryhl <aliceryhl@google.com> [250204 07:46]:
> On Mon, Feb 3, 2025 at 4:44 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> >
> > * Alice Ryhl <aliceryhl@google.com> [250203 07:15]:
> >
> > Hi Alice,
> >
> > These comments are probably for my education.  I'm holding back my
> > training on naming and trying to reconcile the snake-case and upper case
> > camel of the rust language.
> >
> > > This adds a type called VmAreaRef which is used when referencing a vma
> >
> > How did you land on VmAreaRef?  Why not VmaRef?  We mostly use vma for
> > vmas today, not vm_area.  Is it because the header declares it
> > vm_area_struct?  I'd be happier with VmaRef, but I'm guessing this is a
> > direct translation from vm_area_struct?
> 
> Sorry missed this on my first pass.
> 
> Yes, it follows the C name. Happy to rename to VmaRef.

I don't want to cause churn on a v13 patch set, and I've lived with the
verbose vm_area_struct...

It does cause function declarations to split lines frequently.

It's up to you, I'll accept it whatever way you decide.

Thanks,
Liam


* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-04 14:55       ` Liam R. Howlett
@ 2025-02-04 15:05         ` Alice Ryhl
  2025-02-04 15:46           ` Liam R. Howlett
  0 siblings, 1 reply; 23+ messages in thread
From: Alice Ryhl @ 2025-02-04 15:05 UTC (permalink / raw)
  To: Liam R. Howlett, Alice Ryhl, Miguel Ojeda, Matthew Wilcox,
	Lorenzo Stoakes, Vlastimil Babka, John Hubbard, Andrew Morton,
	Greg Kroah-Hartman, Arnd Bergmann, Jann Horn, Suren Baghdasaryan,
	Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux

On Tue, Feb 4, 2025 at 3:55 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
>
> * Alice Ryhl <aliceryhl@google.com> [250204 07:45]:
> > On Mon, Feb 3, 2025 at 4:44 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> > >
> ...
>
> > > >
> > > > +impl<'a> MmapReadGuard<'a> {
> > > > +    /// Look up a vma at the given address.
> > > > +    #[inline]
> > > > +    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmAreaRef> {
> > > > +        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
> > > > +        // for `vma_addr`.
> > >
> > > Is this true?  In C we hold a reference to the mm and the vma can still
> > > go away.  We get safety from the locking on the C side.
> >
> > Notice that this function is in the `impl MmapReadGuard` block. This
> > implies that you *must* hold the mmap read guard to call this
> > function.
> >
> > The safety comment should probably be updated to mention that we hold the guard.
>
> Thanks.
>
> Does it imply you must hold the lock or does it enforce that the lock is
> held?

Not sure what distinction you're asking about. To obtain a
MmapReadGuard object, you need to call the `mmap_read_lock` function,
which locks the mmap lock and returns an MmapReadGuard. When the
MmapReadGuard object goes out of scope, its destructor runs which
unlocks the lock. You can only call this method given a reference to a
MmapReadGuard, so you can only call it after calling mmap_read_lock
and before its destructor runs.
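
In code, it is the usual RAII pattern; a minimal sketch mirroring this
series:

pub fn mmap_read_lock(&self) -> MmapReadGuard<'_> {
    // SAFETY: The pointer is valid since `self` is a valid mm.
    unsafe { bindings::mmap_read_lock(self.as_raw()) };

    // INVARIANT: We just acquired the mmap read lock.
    MmapReadGuard {
        mm: self,
        _nts: NotThreadSafe,
    }
}

impl Drop for MmapReadGuard<'_> {
    fn drop(&mut self) {
        // SAFETY: We hold the mmap read lock, so we can release it now.
        unsafe { bindings::mmap_read_unlock(self.mm.as_raw()) };
    }
}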

> > > > +        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr) };
> > > > +
> > > > +        if vma.is_null() {
> > > > +            None
> > > > +        } else {
> > > > +            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
> > > > +            // the returned area will borrow from this read lock guard, so it can only be used
> > > > +            // while the mmap read lock is still held.
> > >
> > > So we have complicated the locking of the vmas with rcu and per-vma
> > > locking recently.  We are now able to look up and use a vma under the
> > > rcu read lock.  Does this translate to the Rust model?
> > >
> > > I believe this is true in recent version of binder as well?
> >
> > Yes. The safety requirements of VmAreaRef is that you must hold the
> > mmap read lock *or* the vma read lock while you have a VmAreaRef
> > reference. This particular method achieves that requirement by holding
> > the mmap read lock. But there is also a Rust lock_vma_under_rcu(), see
> > patch 4 for that.
>
> Right, okay.  Thanks.  You can get the reference by only holding the rcu
> read lock, but you should hold the vma lock to ensure that the vma
> itself (and not just the pointer) is safe to use.

Hmm... To modify the vma, you must hold the mmap *and* vma write lock,
so holding the mmap read lock prevents mutations? Is the mmap read
lock not enough to read from the vma?

> > > > +        // SAFETY: The caller ensures that the invariants are satisfied for the duration of 'a.
> > > > +        unsafe { &*vma.cast() }
> > > > +    }
> > > > +
> > > > +    /// Returns a raw pointer to this area.
> > > > +    #[inline]
> > > > +    pub fn as_ptr(&self) -> *mut bindings::vm_area_struct {
> > > > +        self.vma.get()
> > > > +    }
> > > > +
> > > > +    /// Access the underlying `mm_struct`.
> > > > +    #[inline]
> > > > +    pub fn mm(&self) -> &MmWithUser {
> > > > +        // SAFETY: By the type invariants, this `vm_area_struct` is valid and we hold the mmap/vma
> > > > +        // read lock or stronger. This implies that the underlying mm has a non-zero value of
> > > > +        // `mm_users`.
> > >
> > > Again, I'm not sure this statement still holds as it once did?
> >
> > If it's possible to hold the mmap/vma read lock for a vma that is part
> > of an mm whose mm_users count has dropped to zero, then it is
> > incorrect for us to have this method.
>
> The non-zero statement is fine.  After re-reading it, I think this is
> accurate.

Okay, thanks!

> Thanks for taking the time to explain things.

Of course!

Alice


* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-04 15:05         ` Alice Ryhl
@ 2025-02-04 15:46           ` Liam R. Howlett
  2025-02-05 12:10             ` Alice Ryhl
  0 siblings, 1 reply; 23+ messages in thread
From: Liam R. Howlett @ 2025-02-04 15:46 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Andrew Morton, Greg Kroah-Hartman, Arnd Bergmann,
	Jann Horn, Suren Baghdasaryan, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, linux-kernel, linux-mm, rust-for-linux

* Alice Ryhl <aliceryhl@google.com> [250204 10:06]:
> On Tue, Feb 4, 2025 at 3:55 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> >
> > * Alice Ryhl <aliceryhl@google.com> [250204 07:45]:
> > > On Mon, Feb 3, 2025 at 4:44 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> > > >
> > ...
> >
> > > > >
> > > > > +impl<'a> MmapReadGuard<'a> {
> > > > > +    /// Look up a vma at the given address.
> > > > > +    #[inline]
> > > > > +    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmAreaRef> {
> > > > > +        // SAFETY: We hold a reference to the mm, so the pointer must be valid. Any value is okay
> > > > > +        // for `vma_addr`.
> > > >
> > > > Is this true?  In C we hold a reference to the mm and the vma can still
> > > > go away.  We get safety from the locking on the C side.
> > >
> > > Notice that this function is in the `impl MmapReadGuard` block. This
> > > implies that you *must* hold the mmap read guard to call this
> > > function.
> > >
> > > The safety comment should probably be updated to mention that we hold the guard.
> >
> > Thanks.
> >
> > Does it imply you must hold the lock or does it enforce that the lock is
> > held?
> 
> Not sure what distinction you're asking about. To obtain a
> MmapReadGuard object, you need to call the `mmap_read_lock` function,
> which locks the mmap lock and returns an MmapReadGuard. When the
> MmapReadGuard object goes out of scope, its destructor runs which
> unlocks the lock. You can only call this method given a reference to a
> MmapReadGuard, so you can only call it after calling mmap_read_lock
> and before its destructor runs.

I was wondering whether holding the lock is actually enforced.

Thanks for walking me through the way this works.

> 
> > > > > +        let vma = unsafe { bindings::vma_lookup(self.mm.as_raw(), vma_addr) };
> > > > > +
> > > > > +        if vma.is_null() {
> > > > > +            None
> > > > > +        } else {
> > > > > +            // SAFETY: We just checked that a vma was found, so the pointer is valid. Furthermore,
> > > > > +            // the returned area will borrow from this read lock guard, so it can only be used
> > > > > +            // while the mmap read lock is still held.
> > > >
> > > > So we have complicated the locking of the vmas with rcu and per-vma
> > > > locking recently.  We are now able to look up and use a vma under the
> > > > rcu read lock.  Does this translate to the Rust model?
> > > >
> > > > I believe this is true in recent version of binder as well?
> > >
> > > Yes. The safety requirements of VmAreaRef is that you must hold the
> > > mmap read lock *or* the vma read lock while you have a VmAreaRef
> > > reference. This particular method achieves that requirement by holding
> > > the mmap read lock. But there is also a Rust lock_vma_under_rcu(), see
> > > patch 4 for that.
> >
> > Right, okay.  Thanks.  You can get the reference by only holding the rcu
> > read lock, but you should hold the vma lock to ensure that the vma
> > itself (and not just the pointer) is safe to use.
> 
> Hmm... To modify the vma, you must hold the mmap *and* vma write lock,
> so holding the mmap read lock prevents mutations?

Sorry, I think I confused things with my answer.  Your code is fine.
The phrasing "only be used while the mmap read lock is still held"
made me wonder whether further clarification on the locking was needed
here (because the locking is confusing).

Yes, mmap read lock means there are no writers that can modify the vma.
Essentially, you are using the lock to ensure the entire vma space isn't
changed during your operation - which is heavier than just locking one
vma.
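
For anyone following along, roughly how the two options look on the Rust
side (treat the names as illustrative sketches; the per-vma variant is
patch 4 of this series):

// Heavier: the mmap read lock excludes writers from the entire address
// space for as long as `guard` is alive.
let guard = mm.mmap_read_lock();
let vma = guard.vma_lookup(addr);

// Lighter: take only this one vma's read lock under rcu.
// (Illustrative; the exact receiver/signature may differ.)
let vma = mm.lock_vma_under_rcu(addr);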

> Is the mmap read
> lock not enough to read from the vma?

Yes, it's enough.

The vma write lock is needed to ensure there is no race between an rcu
reader using the vma and an mmap writer modifying the vma; that
exclusion is what the per-vma lock provides.

I am confident you already knew this and I'm just saying it now for
people who find this conversation on the mailing list.

...

Thanks,
Liam

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-04 15:46           ` Liam R. Howlett
@ 2025-02-05 12:10             ` Alice Ryhl
  2025-02-05 14:37               ` Liam R. Howlett
  0 siblings, 1 reply; 23+ messages in thread
From: Alice Ryhl @ 2025-02-05 12:10 UTC (permalink / raw)
  To: Liam R. Howlett, Alice Ryhl, Miguel Ojeda, Matthew Wilcox,
	Lorenzo Stoakes, Vlastimil Babka, John Hubbard, Andrew Morton,
	Greg Kroah-Hartman, Arnd Bergmann, Jann Horn, Suren Baghdasaryan,
	Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux

On Tue, Feb 4, 2025 at 4:46 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> > > ...
> >
> > Hmm... To modify the vma, you must hold the mmap *and* vma write lock,
> > so holding the mmap read lock prevents mutations?
>
> Sorry, I think I confused things with my answer.  Your code is fine.
> The phrasing of the "only be used while the mmap read lock is still
> held" made me wonder about further clarification on the locking here
> (because the locking is confusing).
>
> Yes, mmap read lock means there are no writers that can modify the vma.
> Essentially, you are using the lock to ensure the entire vma space isn't
> changed during your operation - which is heavier than just locking one
> vma.

I could extend the safety comment like this:

SAFETY: We just checked that a vma was found, so the pointer is valid.
Furthermore, the returned area will borrow from this read lock guard,
so it can only be used while the mmap read lock is still held. This
ensures that there are no writers because writers must hold both the
mmap and vma write lock.
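
For reference, the shape of the method this comment sits in is what ties
the vma to the guard (signature as in this patch, body elided):

impl<'a> MmapReadGuard<'a> {
    pub fn vma_lookup(&self, vma_addr: usize) -> Option<&virt::VmAreaRef> {
        // With the elided lifetime, the returned &VmAreaRef borrows
        // `self`, i.e. this guard, so it cannot outlive the guard.
        ...
    }
}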

Alice

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-05 12:10             ` Alice Ryhl
@ 2025-02-05 14:37               ` Liam R. Howlett
  2025-02-05 15:24                 ` Alice Ryhl
  0 siblings, 1 reply; 23+ messages in thread
From: Liam R. Howlett @ 2025-02-05 14:37 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Andrew Morton, Greg Kroah-Hartman, Arnd Bergmann,
	Jann Horn, Suren Baghdasaryan, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, linux-kernel, linux-mm, rust-for-linux

* Alice Ryhl <aliceryhl@google.com> [250205 07:10]:
> > ...
> 
> I could extend the safety comment like this:
> 
> SAFETY: We just checked that a vma was found, so the pointer is valid.
> Furthermore, the returned area will borrow from this read lock guard,
> so it can only be used while the mmap read lock is still held. This
> ensures that there are no writers because writers must hold both the
> mmap and vma write lock.

How about just changing the last part to:

Furthermore, the returned vma is still under the protection of the read
lock guard and can be used while the mmap read lock is still held.

Thanks,
Liam

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-05 14:37               ` Liam R. Howlett
@ 2025-02-05 15:24                 ` Alice Ryhl
  2025-02-05 19:09                   ` Liam R. Howlett
  0 siblings, 1 reply; 23+ messages in thread
From: Alice Ryhl @ 2025-02-05 15:24 UTC (permalink / raw)
  To: Liam R. Howlett, Alice Ryhl, Miguel Ojeda, Matthew Wilcox,
	Lorenzo Stoakes, Vlastimil Babka, John Hubbard, Andrew Morton,
	Greg Kroah-Hartman, Arnd Bergmann, Jann Horn, Suren Baghdasaryan,
	Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux

On Wed, Feb 5, 2025 at 3:38 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
>
> * Alice Ryhl <aliceryhl@google.com> [250205 07:10]:
> > > ...
> >
> > I could extend the safety comment like this:
> >
> > SAFETY: We just checked that a vma was found, so the pointer is valid.
> > Furthermore, the returned area will borrow from this read lock guard,
> > so it can only be used while the mmap read lock is still held. This
> > ensures that there are no writers because writers must hold both the
> > mmap and vma write lock.
>
> How about just changing the last part to:
>
> Furthermore, the returned vma is still under the protection of the read
> lock guard and can be used while the mmap read lock is still held.

Well, the important part here is that you can't do this:

let guard = mm.mmap_read_lock();
let vma = guard.vma_lookup(...)?;
drop(guard);
vma.foo();

since that would use the vma after the lock has been released. The
above is prevented because `vma` borrows from `guard`, so you can only
use `vma` while `guard` is still valid.
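
(If you try, rustc rejects the `drop(guard)` line with E0505: cannot
move out of `guard` because it is borrowed - `vma` still borrows it at
that point.)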

Alice

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-05 15:24                 ` Alice Ryhl
@ 2025-02-05 19:09                   ` Liam R. Howlett
  2025-02-05 19:12                     ` Alice Ryhl
  0 siblings, 1 reply; 23+ messages in thread
From: Liam R. Howlett @ 2025-02-05 19:09 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Andrew Morton, Greg Kroah-Hartman, Arnd Bergmann,
	Jann Horn, Suren Baghdasaryan, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, linux-kernel, linux-mm, rust-for-linux

* Alice Ryhl <aliceryhl@google.com> [250205 10:24]:
> On Wed, Feb 5, 2025 at 3:38 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> >
> > * Alice Ryhl <aliceryhl@google.com> [250205 07:10]:
> > > > ...
> > >
> > > I could extend the safety comment like this:
> > >
> > > SAFETY: We just checked that a vma was found, so the pointer is valid.
> > > Furthermore, the returned area will borrow from this read lock guard,
> > > so it can only be used while the mmap read lock is still held. This
> > > ensures that there are no writers because writers must hold both the
> > > mmap and vma write lock.
> >
> > How about just changing the last part to:
> >
> > Furthermore, the returned vma is still under the protection of the read
> > lock guard and can be used while the mmap read lock is still held.
> 
> Well, the important part here is that you can't do this:
> 
> let guard = mm.mmap_read_lock();
> let vma = guard.vma_lookup(...)?;
> drop(guard);
> vma.foo();
> 
> since that would use the vma after the lock has been released. The
> above is prevented because `vma` borrows from `guard`, so you can only
> use `vma` while `guard` is still valid.
> 

But it implies that this isn't valid:

 let guard = mm.mmap_read_lock();
 let vma = guard.vma_lookup(...)?;

 vma_lock(vma);

 drop(guard);
 vma.foo();

 vma_unlock(vma);

See mm/userfaultfd.c:uffd_lock_vma(), which falls back to the mmap read
lock to do this when rcu lock + lock_vma_under_rcu() is unable to lock
the vma.

Thanks,
Liam

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-05 19:09                   ` Liam R. Howlett
@ 2025-02-05 19:12                     ` Alice Ryhl
  2025-02-05 19:22                       ` Liam R. Howlett
  0 siblings, 1 reply; 23+ messages in thread
From: Alice Ryhl @ 2025-02-05 19:12 UTC (permalink / raw)
  To: Liam R. Howlett, Alice Ryhl, Miguel Ojeda, Matthew Wilcox,
	Lorenzo Stoakes, Vlastimil Babka, John Hubbard, Andrew Morton,
	Greg Kroah-Hartman, Arnd Bergmann, Jann Horn, Suren Baghdasaryan,
	Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux

On Wed, Feb 5, 2025 at 8:10 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
>
> * Alice Ryhl <aliceryhl@google.com> [250205 10:24]:
> > > ...
> >
> > Well, the important part here is that you can't do this:
> >
> > let guard = mm.mmap_read_lock();
> > let vma = guard.vma_lookup(...)?;
> > drop(guard);
> > vma.foo();
> >
> > since that would use the vma after the lock has been released. The
> > above is prevented because `vma` borrows from `guard`, so you can only
> > use `vma` while `guard` is still valid.
> >
>
> But it implies that this isn't valid:
>
>  let guard = mm.mmap_read_lock();
>  let vma = guard.vma_lookup(...)?;
>
>  vma_lock(vma);
>
>  drop(guard);
>  vma.foo();
>
>  vma_unlock(vma);
>
> See mm/userfaultfd.c:uffd_lock_vma(), which falls back to mmap read lock
> to do this if rcu lock + lock_vma_under_rcu() is unable to lock the vma.

This patchset does not have the functionality for doing that, but it's
definitely possible to add.

Alice

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-05 19:12                     ` Alice Ryhl
@ 2025-02-05 19:22                       ` Liam R. Howlett
  2025-02-05 19:26                         ` Alice Ryhl
  0 siblings, 1 reply; 23+ messages in thread
From: Liam R. Howlett @ 2025-02-05 19:22 UTC (permalink / raw)
  To: Alice Ryhl
  Cc: Miguel Ojeda, Matthew Wilcox, Lorenzo Stoakes, Vlastimil Babka,
	John Hubbard, Andrew Morton, Greg Kroah-Hartman, Arnd Bergmann,
	Jann Horn, Suren Baghdasaryan, Alex Gaynor, Boqun Feng, Gary Guo,
	Björn Roy Baron, Benno Lossin, Andreas Hindborg,
	Trevor Gross, linux-kernel, linux-mm, rust-for-linux

* Alice Ryhl <aliceryhl@google.com> [250205 14:13]:
> On Wed, Feb 5, 2025 at 8:10 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> >
> > > ...
> >
> > But it implies that this isn't valid:
> >
> >  let guard = mm.mmap_read_lock();
> >  let vma = guard.vma_lookup(...)?;
> >
> >  vma_lock(vma);
> >
> >  drop(guard);
> >  vma.foo();
> >
> >  vma_unlock(vma);
> >
> > See mm/userfaultfd.c:uffd_lock_vma(), which falls back to mmap read lock
> > to do this if rcu lock + lock_vma_under_rcu() is unable to lock the vma.
> 
> This patchset does not have the functionality for doing that, but it's
> definitely possible to add.
>

I don't think that's necessary right now.  It's just that I read that
comment and it seemed to imply something that isn't strictly true with
the "only valid" part.  I think?  Is Rust doing something that makes it
true?

Thanks,
Liam


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access
  2025-02-05 19:22                       ` Liam R. Howlett
@ 2025-02-05 19:26                         ` Alice Ryhl
  0 siblings, 0 replies; 23+ messages in thread
From: Alice Ryhl @ 2025-02-05 19:26 UTC (permalink / raw)
  To: Liam R. Howlett, Alice Ryhl, Miguel Ojeda, Matthew Wilcox,
	Lorenzo Stoakes, Vlastimil Babka, John Hubbard, Andrew Morton,
	Greg Kroah-Hartman, Arnd Bergmann, Jann Horn, Suren Baghdasaryan,
	Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron,
	Benno Lossin, Andreas Hindborg, Trevor Gross, linux-kernel,
	linux-mm, rust-for-linux

On Wed, Feb 5, 2025 at 8:22 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
>
> * Alice Ryhl <aliceryhl@google.com> [250205 14:13]:
> > On Wed, Feb 5, 2025 at 8:10 PM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
> > >
> > > > ...
> > >
> > > But it implies that this isn't valid:
> > >
> > >  let guard = mm.mmap_read_lock();
> > >  let vma = guard.vma_lookup(...)?;
> > >
> > >  vma_lock(vma);
> > >
> > >  drop(guard);
> > >  vma.foo();
> > >
> > >  vma_unlock(vma);
> > >
> > > See mm/userfaultfd.c:uffd_lock_vma(), which falls back to mmap read lock
> > > to do this if rcu lock + lock_vma_under_rcu() is unable to lock the vma.
> >
> > This patchset does not have the functionality for doing that, but it's
> > definitely possible to add.
> >
>
> I don't think that's necessary right now.  It's just that I read that
> comment and it seemed to imply something that isn't strictly true with
> the "only valid" part.  I think?  Is rust doing something that makes it
> true?

The code would look like:

let guard = mm.mmap_read_lock();
let vma = guard.vma_lookup(...)?;
let vma_guard = vma.vma_lock();
drop(guard); // this is mmap_read_unlock()
vma_guard.foo();
drop(vma_guard); // this is vma_unlock()

So you couldn't use `vma` after drop(guard), but `vma_guard` would still
be usable. (And of course the drop calls also happen automatically at
the end of the scope if you do not call drop explicitly.)
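
(For this to work, the hypothetical vma_lock() would need to return a
guard that owns the per-vma lock rather than borrowing from the mmap
guard. A sketch of the shape such an API could take - none of this is
in the current series:

impl VmAreaRef {
    // Hypothetical: take the per-vma read lock. The returned guard is
    // self-contained, so it can outlive the mmap read guard, and it
    // releases the vma lock in its destructor.
    pub fn vma_lock(&self) -> VmaReadGuard { ... }
}

With that signature, `vma_guard` does not borrow `guard`, which is what
makes the `drop(guard)` above legal.)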

Alice

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2025-02-05 19:26 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-02-03 12:14 [PATCH v13 0/8] Rust support for mm_struct, vm_area_struct, and mmap Alice Ryhl
2025-02-03 12:14 ` [PATCH v13 1/8] mm: rust: add abstraction for struct mm_struct Alice Ryhl
2025-02-03 12:14 ` [PATCH v13 2/8] mm: rust: add vm_area_struct methods that require read access Alice Ryhl
2025-02-03 15:44   ` Liam R. Howlett
2025-02-04 12:45     ` Alice Ryhl
2025-02-04 14:55       ` Liam R. Howlett
2025-02-04 15:05         ` Alice Ryhl
2025-02-04 15:46           ` Liam R. Howlett
2025-02-05 12:10             ` Alice Ryhl
2025-02-05 14:37               ` Liam R. Howlett
2025-02-05 15:24                 ` Alice Ryhl
2025-02-05 19:09                   ` Liam R. Howlett
2025-02-05 19:12                     ` Alice Ryhl
2025-02-05 19:22                       ` Liam R. Howlett
2025-02-05 19:26                         ` Alice Ryhl
2025-02-04 12:46     ` Alice Ryhl
2025-02-04 14:58       ` Liam R. Howlett
2025-02-03 12:14 ` [PATCH v13 3/8] mm: rust: add vm_insert_page Alice Ryhl
2025-02-03 12:14 ` [PATCH v13 4/8] mm: rust: add lock_vma_under_rcu Alice Ryhl
2025-02-03 12:14 ` [PATCH v13 5/8] mm: rust: add mmput_async support Alice Ryhl
2025-02-03 12:14 ` [PATCH v13 6/8] mm: rust: add VmAreaNew for f_ops->mmap() Alice Ryhl
2025-02-03 12:14 ` [PATCH v13 7/8] rust: miscdevice: add mmap support Alice Ryhl
2025-02-03 12:14 ` [PATCH v13 8/8] task: rust: rework how current is accessed Alice Ryhl

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).