* [PATCH] rust_binder: use lock_vma_under_rcu() in shrinker
@ 2026-05-07 11:07 Alice Ryhl
From: Alice Ryhl @ 2026-05-07 11:07 UTC (permalink / raw)
To: Greg Kroah-Hartman, Carlos Llamas
Cc: Lorenzo Stoakes, Liam R. Howlett, Miguel Ojeda, Boqun Feng,
Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg,
Alice Ryhl, Trevor Gross, Danilo Krummrich, linux-mm,
rust-for-linux, linux-kernel
The shrinker callback currently uses the mmap read trylock operation to
attempt to access the vma, but it's generally better to only lock the
vma instead of the whole mmap when you can.
When lock_vma_under_rcu() fails, there is no reason to fall back to the
mmap lock, because this is already a trylock operation that is allowed
to fail.
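In outline, the change swaps a whole-address-space trylock for a
per-VMA one (a sketch simplified from the shrinker callback in the diff
below; error paths elided):

    // Old: trylock the entire mmap for reading.
    mmap_read = match mm.mmap_read_trylock() {
        Some(guard) => guard,
        None => return LRU_SKIP,
    };

    // New: lock only the VMA covering vma_addr, under RCU. This is
    // also a trylock-style operation, so on failure we still just skip.
    vma_read = match mm.lock_vma_under_rcu(vma_addr) {
        Some(guard) => guard,
        None => return LRU_SKIP,
    };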
Signed-off-by: Alice Ryhl <aliceryhl@google.com>
---
drivers/android/binder/page_range.rs | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/drivers/android/binder/page_range.rs b/drivers/android/binder/page_range.rs
index e54a90e62402..e82a5523804f 100644
--- a/drivers/android/binder/page_range.rs
+++ b/drivers/android/binder/page_range.rs
@@ -705,7 +705,7 @@ fn drop(self: Pin<&mut Self>) {
let page;
let page_index;
let mm;
- let mmap_read;
+ let vma_read;
let mm_mutex;
let vma_addr;
let range_ptr;
@@ -728,17 +728,18 @@ fn drop(self: Pin<&mut Self>) {
None => return LRU_SKIP,
};
- mmap_read = match mm.mmap_read_trylock() {
- Some(guard) => guard,
- None => return LRU_SKIP,
- };
-
// We can't lock it normally here, since we hold the lru lock.
let inner = match range.lock.try_lock() {
Some(inner) => inner,
None => return LRU_SKIP,
};
+ vma_addr = inner.vma_addr;
+ vma_read = match mm.lock_vma_under_rcu(vma_addr) {
+ Some(guard) => guard,
+ None => return LRU_SKIP,
+ };
+
// SAFETY: The item is in this lru list, so it's okay to remove it.
unsafe { bindings::list_lru_isolate(lru, item) };
@@ -751,7 +752,6 @@ fn drop(self: Pin<&mut Self>) {
// `zap_page_range` before we release the mmap lock, so `use_page_slow` will not be able to
// insert a new page until after our call to `zap_page_range`.
page = unsafe { PageInfo::take_page(info) };
- vma_addr = inner.vma_addr;
// From this point on, we don't access this PageInfo or ShrinkablePageRange again, because
// they can be freed at any point after we unlock `lru_lock`. This is with the exception of
@@ -761,14 +761,12 @@ fn drop(self: Pin<&mut Self>) {
// SAFETY: The lru lock is locked when this method is called.
unsafe { bindings::spin_unlock(&raw mut (*lru).lock) };
- if let Some(unchecked_vma) = mmap_read.vma_lookup(vma_addr) {
- if let Some(vma) = check_vma(unchecked_vma, range_ptr) {
- let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);
- vma.zap_vma_range(user_page_addr, PAGE_SIZE);
- }
+ if let Some(vma) = check_vma(&vma_read, range_ptr) {
+ let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);
+ vma.zap_vma_range(user_page_addr, PAGE_SIZE);
}
- drop(mmap_read);
+ drop(vma_read);
drop(mm_mutex);
drop(mm);
drop(page);
---
base-commit: 7fd2df204f342fc17d1a0bfcd474b24232fb0f32
change-id: 20260507-binder-shrinker-lockvma-51ff7d621f25
Best regards,
--
Alice Ryhl <aliceryhl@google.com>
* Re: [PATCH] rust_binder: use lock_vma_under_rcu() in shrinker
From: Lorenzo Stoakes @ 2026-05-11 13:19 UTC (permalink / raw)
To: Alice Ryhl
Cc: Greg Kroah-Hartman, Carlos Llamas, Liam R. Howlett, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Danilo Krummrich, linux-mm,
rust-for-linux, linux-kernel
On Thu, May 07, 2026 at 11:07:47AM +0000, Alice Ryhl wrote:
> The shrinker callback currently uses the mmap read trylock operation to
> attempt to access the vma, but it's generally better to only lock the
> vma instead of the whole mmap when you can.
>
> When lock_vma_under_rcu() fails, there is no reason to lock the mmap
> lock instead because it's already a trylock operation that is allowed to
> fail.
>
> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
This seems similar to Dave's patch [0], not sure if it was inspired by that? :)
[0]: https://lore.kernel.org/all/20260429181957.7511C256@davehans-spike.ostc.intel.com/
In any case the general approach seems sane to me. As Rust code I can't really
review it properly, but aside from the comment below (presumably that's fine) it
conceptually LGTM, so:
Acked-by: Lorenzo Stoakes <ljs@kernel.org>
> ---
> drivers/android/binder/page_range.rs | 24 +++++++++++-------------
> 1 file changed, 11 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/android/binder/page_range.rs b/drivers/android/binder/page_range.rs
> index e54a90e62402..e82a5523804f 100644
> --- a/drivers/android/binder/page_range.rs
> +++ b/drivers/android/binder/page_range.rs
> @@ -705,7 +705,7 @@ fn drop(self: Pin<&mut Self>) {
> let page;
> let page_index;
> let mm;
> - let mmap_read;
> + let vma_read;
> let mm_mutex;
> let vma_addr;
> let range_ptr;
> @@ -728,17 +728,18 @@ fn drop(self: Pin<&mut Self>) {
> None => return LRU_SKIP,
> };
>
> - mmap_read = match mm.mmap_read_trylock() {
> - Some(guard) => guard,
> - None => return LRU_SKIP,
> - };
> -
> // We can't lock it normally here, since we hold the lru lock.
> let inner = match range.lock.try_lock() {
> Some(inner) => inner,
> None => return LRU_SKIP,
> };
>
> + vma_addr = inner.vma_addr;
> + vma_read = match mm.lock_vma_under_rcu(vma_addr) {
> + Some(guard) => guard,
> + None => return LRU_SKIP,
> + };
> +
One question here - are we good to do this _after_ locking the 'inner' lock
above?
> // SAFETY: The item is in this lru list, so it's okay to remove it.
> unsafe { bindings::list_lru_isolate(lru, item) };
>
> @@ -751,7 +752,6 @@ fn drop(self: Pin<&mut Self>) {
> // `zap_page_range` before we release the mmap lock, so `use_page_slow` will not be able to
> // insert a new page until after our call to `zap_page_range`.
> page = unsafe { PageInfo::take_page(info) };
> - vma_addr = inner.vma_addr;
>
> // From this point on, we don't access this PageInfo or ShrinkablePageRange again, because
> // they can be freed at any point after we unlock `lru_lock`. This is with the exception of
> @@ -761,14 +761,12 @@ fn drop(self: Pin<&mut Self>) {
> // SAFETY: The lru lock is locked when this method is called.
> unsafe { bindings::spin_unlock(&raw mut (*lru).lock) };
>
> - if let Some(unchecked_vma) = mmap_read.vma_lookup(vma_addr) {
> - if let Some(vma) = check_vma(unchecked_vma, range_ptr) {
> - let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);
> - vma.zap_vma_range(user_page_addr, PAGE_SIZE);
> - }
> + if let Some(vma) = check_vma(&vma_read, range_ptr) {
> + let user_page_addr = vma_addr + (page_index << PAGE_SHIFT);
> + vma.zap_vma_range(user_page_addr, PAGE_SIZE);
> }
>
> - drop(mmap_read);
> + drop(vma_read);
> drop(mm_mutex);
> drop(mm);
> drop(page);
>
> ---
> base-commit: 7fd2df204f342fc17d1a0bfcd474b24232fb0f32
> change-id: 20260507-binder-shrinker-lockvma-51ff7d621f25
>
> Best regards,
> --
> Alice Ryhl <aliceryhl@google.com>
>
* Re: [PATCH] rust_binder: use lock_vma_under_rcu() in shrinker
From: Alice Ryhl @ 2026-05-12 8:56 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Greg Kroah-Hartman, Carlos Llamas, Liam R. Howlett, Miguel Ojeda,
Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin,
Andreas Hindborg, Trevor Gross, Danilo Krummrich, linux-mm,
rust-for-linux, linux-kernel
On Mon, May 11, 2026 at 02:19:42PM +0100, Lorenzo Stoakes wrote:
> On Thu, May 07, 2026 at 11:07:47AM +0000, Alice Ryhl wrote:
> > The shrinker callback currently uses the mmap read trylock operation to
> > attempt to access the vma, but it's generally better to only lock the
> > vma instead of the whole mmap when you can.
> >
> > When lock_vma_under_rcu() fails, there is no reason to lock the mmap
> > lock instead because it's already a trylock operation that is allowed to
> > fail.
> >
> > Signed-off-by: Alice Ryhl <aliceryhl@google.com>
>
> This seems similar to Dave's patch [0], not sure if it was inspired by that? :)
>
> [0]: https://lore.kernel.org/all/20260429181957.7511C256@davehans-spike.ostc.intel.com/
I was reminded by that change, but it's been on my list since commit
a0b9b0f1433c ("rust_binder: use lock_vma_under_rcu() in
use_page_slow()").
> In any case the general approach seems sane to me, as rust code I can't really
> review it properly, but aside from the comment below (presumably that's fine) it
> conceptually LGTM so:
>
> Acked-by: Lorenzo Stoakes <ljs@kernel.org>
>
> > ---
> > drivers/android/binder/page_range.rs | 24 +++++++++++-------------
> > 1 file changed, 11 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/android/binder/page_range.rs b/drivers/android/binder/page_range.rs
> > index e54a90e62402..e82a5523804f 100644
> > --- a/drivers/android/binder/page_range.rs
> > +++ b/drivers/android/binder/page_range.rs
> > @@ -705,7 +705,7 @@ fn drop(self: Pin<&mut Self>) {
> > let page;
> > let page_index;
> > let mm;
> > - let mmap_read;
> > + let vma_read;
> > let mm_mutex;
> > let vma_addr;
> > let range_ptr;
> > @@ -728,17 +728,18 @@ fn drop(self: Pin<&mut Self>) {
> > None => return LRU_SKIP,
> > };
> >
> > - mmap_read = match mm.mmap_read_trylock() {
> > - Some(guard) => guard,
> > - None => return LRU_SKIP,
> > - };
> > -
> > // We can't lock it normally here, since we hold the lru lock.
> > let inner = match range.lock.try_lock() {
> > Some(inner) => inner,
> > None => return LRU_SKIP,
> > };
> >
> > + vma_addr = inner.vma_addr;
> > + vma_read = match mm.lock_vma_under_rcu(vma_addr) {
> > + Some(guard) => guard,
> > + None => return LRU_SKIP,
> > + };
> > +
>
> One question here - are we good to do this _after_ locking the 'inner' lock
> above?
Well, it's a spinlock so unless lock_vma_under_rcu() can sleep it should
be fine. Though we also hold *another* spinlock here, so if it can
sleep, we couldn't take it before `inner` either.
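To spell out the nesting in question (a sketch, simplified from the
callback; names as in the patch):

    // lru_lock: spinlock, already held when the scan callback runs
    //   -> range.lock.try_lock()         // `inner`: a spinlock, trylocked
    //     -> mm.lock_vma_under_rcu(..)   // per-VMA read trylock under RCU

Assuming lock_vma_under_rcu() is non-sleeping, taking it inside the two
spinlocks doesn't change the picture.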
Alice