From: Carlos Llamas <cmllamas@google.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: "Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
"Arve Hjønnevåg" <arve@android.com>,
"Todd Kjos" <tkjos@android.com>,
"Martijn Coenen" <maco@android.com>,
"Joel Fernandes" <joel@joelfernandes.org>,
"Christian Brauner" <brauner@kernel.org>,
linux-kernel@vger.kernel.org, kernel-team@android.com,
"Nhat Pham" <nphamcs@gmail.com>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Barry Song" <v-songbaohua@oppo.com>,
"Hillf Danton" <hdanton@sina.com>,
"Lorenzo Stoakes" <lorenzo.stoakes@oracle.com>
Subject: Re: [PATCH v2 8/8] binder: use per-vma lock in page installation
Date: Thu, 7 Nov 2024 17:55:05 +0000
Message-ID: <Zyz--bjvkVXngc5U@google.com>
In-Reply-To: <CAJuCfpHM8J0S4dXhxmVuFSTUV0czg1CTFpf_C=k7M57T9qh-VQ@mail.gmail.com>
On Thu, Nov 07, 2024 at 08:16:39AM -0800, Suren Baghdasaryan wrote:
> On Wed, Nov 6, 2024 at 8:03 PM Carlos Llamas <cmllamas@google.com> wrote:
> >
> > Use per-vma locking for concurrent page installation; this minimizes
> > contention with unrelated vmas, improving performance. The mmap_lock is
> > still acquired when needed, e.g. before get_user_pages_remote().
> >
> > Many thanks to Barry Song who posted a similar approach [1].
> >
> > Link: https://lore.kernel.org/all/20240902225009.34576-1-21cnbao@gmail.com/ [1]
> > Cc: Nhat Pham <nphamcs@gmail.com>
> > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > Cc: Barry Song <v-songbaohua@oppo.com>
> > Cc: Suren Baghdasaryan <surenb@google.com>
> > Cc: Hillf Danton <hdanton@sina.com>
> > Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > Signed-off-by: Carlos Llamas <cmllamas@google.com>
> > ---
> > drivers/android/binder_alloc.c | 85 +++++++++++++++++++++++-----------
> > 1 file changed, 57 insertions(+), 28 deletions(-)
> >
> > diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
> > index 814435a2601a..debfa541e01b 100644
> > --- a/drivers/android/binder_alloc.c
> > +++ b/drivers/android/binder_alloc.c
> > @@ -233,6 +233,56 @@ static inline bool binder_alloc_is_mapped(struct binder_alloc *alloc)
> > return smp_load_acquire(&alloc->mapped);
> > }
> >
> > +static struct page *binder_page_lookup(struct mm_struct *mm,
>
> Maybe pass "struct binder_alloc" in both binder_page_lookup() and
> binder_page_insert()?
I'm not sure this is worth it, though. It would match binder_page_insert()
nicely, but alloc is otherwise unused in binder_page_lookup(). Its only
purpose would be to access the mm:
	static struct page *binder_page_lookup(struct binder_alloc *alloc,
					       unsigned long addr)
	{
		struct mm_struct *mm = alloc->mm;
If you think this is cleaner I really don't mind adding it for v3.
> I like how previous code stabilized mm with mmget_not_zero() once vs
> now binder_page_lookup() and binder_page_insert() have to mmget/mmput
> individually. Not a big deal but looked cleaner.
Sure, I can factor this out (the way it was in v1).
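To make sure we are talking about the same thing, a rough sketch of the
factored-out version (untested, and the caller name/flow here is just
illustrative): the caller pins the mm once and the helpers take it as an
argument, so neither helper needs its own mmget/mmput pair.

```c
/*
 * Illustrative sketch only: stabilize the mm once in the caller and
 * pass it down, instead of mmget_not_zero()/mmput_async() in both
 * binder_page_lookup() and binder_page_insert().
 */
static int binder_install_single_page(struct binder_alloc *alloc,
				      unsigned long addr,
				      struct page *page)
{
	struct mm_struct *mm = alloc->mm;
	int ret;

	if (!mmget_not_zero(mm))
		return -ESRCH;

	ret = binder_page_insert(alloc, mm, addr, page);
	/* ... e.g. handle -EBUSY via binder_page_lookup(mm, addr) ... */

	mmput_async(mm);
	return ret;
}
```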
>
> > + unsigned long addr)
> > +{
> > + struct page *page;
> > + long ret;
> > +
> > + if (!mmget_not_zero(mm))
> > + return NULL;
> > +
> > + mmap_read_lock(mm);
> > + ret = get_user_pages_remote(mm, addr, 1, 0, &page, NULL);
> > + mmap_read_unlock(mm);
> > + mmput_async(mm);
> > +
> > + return ret > 0 ? page : NULL;
> > +}
> > +
> > +static int binder_page_insert(struct binder_alloc *alloc,
> > + unsigned long addr,
> > + struct page *page)
> > +{
> > + struct mm_struct *mm = alloc->mm;
> > + struct vm_area_struct *vma;
> > + int ret = -ESRCH;
> > +
> > + if (!mmget_not_zero(mm))
> > + return -ESRCH;
> > +
> > + /* attempt per-vma lock first */
> > + vma = lock_vma_under_rcu(mm, addr);
> > + if (!vma)
> > + goto lock_mmap;
> > +
> > + if (binder_alloc_is_mapped(alloc))
>
> I don't think you need this check here. lock_vma_under_rcu() ensures
> that the VMA was not detached from the tree after locking the VMA, so
> if you got a VMA it's in the tree and it can't be removed (because
> it's locked). remove_vma()->vma_close()->vma->vm_ops->close() is
> called after VMA gets detached from the tree and that won't happen
> while VMA is locked. So, if lock_vma_under_rcu() returns a VMA,
> binder_alloc_is_mapped() has to always return true. A WARN_ON() check
> here to ensure that might be a better option.
Yes, we are guaranteed to have _a_ non-isolated vma. However, the check
validates that it's the _expected_ vma. IIUC, our vma could have been
unmapped (clearing alloc->mapped) and a _new_, unrelated vma could have
been assigned the same address range?

binder_alloc_is_mapped() checks whether the vma still belongs to binder.
This reminds me, I should also add this check to the
get_user_pages_remote() path.
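Something like this for the lookup side (untested sketch of what I have
in mind for v3): the alloc->mapped check runs under the mmap_lock, so it
is stable with respect to a concurrent munmap of the binder vma.

```c
/*
 * Sketch only: skip get_user_pages_remote() if the binder vma has
 * already been unmapped, checked under the mmap_lock.
 */
static struct page *binder_page_lookup(struct binder_alloc *alloc,
				       unsigned long addr)
{
	struct mm_struct *mm = alloc->mm;
	struct page *page = NULL;
	long npages = 0;

	if (!mmget_not_zero(mm))
		return NULL;

	mmap_read_lock(mm);
	if (binder_alloc_is_mapped(alloc))
		npages = get_user_pages_remote(mm, addr, 1, 0, &page, NULL);
	mmap_read_unlock(mm);
	mmput_async(mm);

	return npages > 0 ? page : NULL;
}
```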
>
> > + ret = vm_insert_page(vma, addr, page);
> > + vma_end_read(vma);
> > + goto done;
>
> I think the code would be more readable without these jumps:
>
>         vma = lock_vma_under_rcu(mm, addr);
>         if (vma) {
>                 if (!WARN_ON(!binder_alloc_is_mapped(alloc)))
>                         ret = vm_insert_page(vma, addr, page);
>                 vma_end_read(vma);
>         } else {
>                 /* fall back to mmap_lock */
>                 mmap_read_lock(mm);
>                 vma = vma_lookup(mm, addr);
>                 if (vma && binder_alloc_is_mapped(alloc))
>                         ret = vm_insert_page(vma, addr, page);
>                 mmap_read_unlock(mm);
>         }
>         mmput_async(mm);
>         return ret;
Ok. With mmput_async() factored out into the caller, I'm thinking I'll
add an early return instead, e.g.:

	vma = lock_vma_under_rcu(mm, addr);
	if (vma) {
		if (binder_alloc_is_mapped(alloc))
			ret = vm_insert_page(vma, addr, page);
		vma_end_read(vma);
		return ret;
	}

	/* fall back to mmap_lock */
	mmap_read_lock(mm);
	[...]
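For completeness, the whole helper would then look roughly like this
(untested sketch, assuming the mmget/mmput pair has moved to the caller
as discussed earlier):

```c
/* Sketch only: per-vma lock fast path with early return, mmap_lock
 * fallback, and the alloc->mapped ownership check on both paths.
 */
static int binder_page_insert(struct binder_alloc *alloc,
			      unsigned long addr,
			      struct page *page)
{
	struct mm_struct *mm = alloc->mm;
	struct vm_area_struct *vma;
	int ret = -ESRCH;

	/* attempt per-vma lock first */
	vma = lock_vma_under_rcu(mm, addr);
	if (vma) {
		if (binder_alloc_is_mapped(alloc))
			ret = vm_insert_page(vma, addr, page);
		vma_end_read(vma);
		return ret;
	}

	/* fall back to mmap_lock */
	mmap_read_lock(mm);
	vma = vma_lookup(mm, addr);
	if (vma && binder_alloc_is_mapped(alloc))
		ret = vm_insert_page(vma, addr, page);
	mmap_read_unlock(mm);
	return ret;
}
```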
Thanks,
Carlos Llamas