linux-kernel.vger.kernel.org archive mirror
From: Suren Baghdasaryan <surenb@google.com>
To: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: akpm@linux-foundation.org, peterz@infradead.org,
	willy@infradead.org,  liam.howlett@oracle.com,
	david.laight.linux@gmail.com, mhocko@suse.com,  vbabka@suse.cz,
	hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
	 mgorman@techsingularity.net, david@redhat.com,
	peterx@redhat.com,  oleg@redhat.com, dave@stgolabs.net,
	paulmck@kernel.org, brauner@kernel.org,  dhowells@redhat.com,
	hdanton@sina.com, hughd@google.com,  lokeshgidra@google.com,
	minchan@google.com, jannh@google.com,  shakeel.butt@linux.dev,
	souravpanda@google.com, pasha.tatashin@soleen.com,
	 klarasmodin@gmail.com, richard.weiyang@gmail.com,
	corbet@lwn.net,  linux-doc@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,  kernel-team@android.com
Subject: Re: [PATCH v9 04/17] mm: introduce vma_iter_store_attached() to use with attached vmas
Date: Mon, 13 Jan 2025 11:09:25 -0800	[thread overview]
Message-ID: <CAJuCfpFXwX+g0rCXAB_8s61VheOJZCBTSk1hyqrSWxqMPrE7MQ@mail.gmail.com> (raw)
In-Reply-To: <640fee1d-e76b-4aca-8975-f6bd4f3279d9@lucifer.local>

On Mon, Jan 13, 2025 at 8:48 AM Lorenzo Stoakes
<lorenzo.stoakes@oracle.com> wrote:
>
> On Mon, Jan 13, 2025 at 08:31:45AM -0800, Suren Baghdasaryan wrote:
> > On Mon, Jan 13, 2025 at 3:58 AM Lorenzo Stoakes
> > <lorenzo.stoakes@oracle.com> wrote:
> > >
> > > On Fri, Jan 10, 2025 at 08:25:51PM -0800, Suren Baghdasaryan wrote:
> > > > vma_iter_store() functions can be used both when adding a new vma and
> > > > when updating an existing one. However for existing ones we do not need
> > > > to mark them attached as they are already marked that way. Introduce
> > > > vma_iter_store_attached() to be used with already attached vmas.
> > >
> > > OK I guess the intent of this is to reinstate the previously existing
> > > asserts, only explicitly checking those places where we attach.
> >
> > No, the motivation is to prevent re-attaching an already attached vma
> > or re-detaching an already detached vma for state consistency. I guess
> > I should amend the description to make that clear.
>
> Sorry for noise, missed this reply.
>
> What I mean by this is, in a past iteration of this series I reviewed code
> where you did this but did _not_ differentiate between cases of new VMAs
> vs. existing, which caused an assert in your series which I reported.
>
> So I'm saying - now you _are_ differentiating between the two cases.
>
> It's certainly worth belabouring the point of exactly what it is you are
> trying to catch here, however! :) So yes please do add a little more to
> commit msg that'd be great, thanks!

Sure. How about:

With vma->detached being a separate flag, double-marking a vma as
attached or detached is not an issue because the flag will simply be
overwritten with the same value. However, once we fold this flag into
the refcount later in this series, re-attaching or re-detaching a vma
becomes an issue since these operations will be
incrementing/decrementing a refcount. Fix the places where we
currently re-attach a vma during vma update and add assertions in
vma_mark_attached()/vma_mark_detached() to catch invalid usage.

>
> >
> > >
> > > I'm a little concerned that by doing this, somebody might simply invoke
> > > this function without realising the implications.
> >
> > Well, in that case somebody should get an assertion. If
> > vma_iter_store() is called against already attached vma, we get this
> > assertion:
> >
> > vma_iter_store()
> >   vma_mark_attached()
> >     vma_assert_detached()
> >
> > If vma_iter_store_attached() is called against a detached vma, we get this one:
> >
> > vma_iter_store_attached()
> >   vma_assert_attached()
> >
> > Does that address your concern?
> >
> > >
> > > Can we have something functional like
> > >
> > > vma_iter_store_new() and vma_iter_store_overwrite()
> >
> > Ok. A bit more churn but should not be too bad.
> >
> > >
> > > ?
> > >
> > > I don't like us just leaving vma_iter_store() quietly making an assumption
> > > that a caller doesn't necessarily realise.
> > >
> > > Also it's more greppable this way.
> > >
> > > I had a look through callers and it does seem you've snagged them all
> > > correctly.
> > >
> > > >
> > > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > > > Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> > > > ---
> > > >  include/linux/mm.h | 12 ++++++++++++
> > > >  mm/vma.c           |  8 ++++----
> > > >  mm/vma.h           | 11 +++++++++--
> > > >  3 files changed, 25 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > > index 2b322871da87..2f805f1a0176 100644
> > > > --- a/include/linux/mm.h
> > > > +++ b/include/linux/mm.h
> > > > @@ -821,6 +821,16 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
> > > >               vma_assert_write_locked(vma);
> > > >  }
> > > >
> > > > +static inline void vma_assert_attached(struct vm_area_struct *vma)
> > > > +{
> > > > +     VM_BUG_ON_VMA(vma->detached, vma);
> > > > +}
> > > > +
> > > > +static inline void vma_assert_detached(struct vm_area_struct *vma)
> > > > +{
> > > > +     VM_BUG_ON_VMA(!vma->detached, vma);
> > > > +}
> > > > +
> > > >  static inline void vma_mark_attached(struct vm_area_struct *vma)
> > > >  {
> > > >       vma->detached = false;
> > > > @@ -866,6 +876,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
> > > >  static inline void vma_start_write(struct vm_area_struct *vma) {}
> > > >  static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> > > >               { mmap_assert_write_locked(vma->vm_mm); }
> > > > +static inline void vma_assert_attached(struct vm_area_struct *vma) {}
> > > > +static inline void vma_assert_detached(struct vm_area_struct *vma) {}
> > > >  static inline void vma_mark_attached(struct vm_area_struct *vma) {}
> > > >  static inline void vma_mark_detached(struct vm_area_struct *vma) {}
> > > >
> > > > diff --git a/mm/vma.c b/mm/vma.c
> > > > index d603494e69d7..b9cf552e120c 100644
> > > > --- a/mm/vma.c
> > > > +++ b/mm/vma.c
> > > > @@ -660,14 +660,14 @@ static int commit_merge(struct vma_merge_struct *vmg,
> > > >       vma_set_range(vmg->vma, vmg->start, vmg->end, vmg->pgoff);
> > > >
> > > >       if (expanded)
> > > > -             vma_iter_store(vmg->vmi, vmg->vma);
> > > > +             vma_iter_store_attached(vmg->vmi, vmg->vma);
> > > >
> > > >       if (adj_start) {
> > > >               adjust->vm_start += adj_start;
> > > >               adjust->vm_pgoff += PHYS_PFN(adj_start);
> > > >               if (adj_start < 0) {
> > > >                       WARN_ON(expanded);
> > > > -                     vma_iter_store(vmg->vmi, adjust);
> > > > +                     vma_iter_store_attached(vmg->vmi, adjust);
> > > >               }
> > > >       }
> > >
> > > I kind of feel this whole function (that yes, I added :>) though derived
> > > from existing logic) needs rework, as it's necessarily rather confusing.
> > >
> > > But hey, that's on me :)
> > >
> > > But this does look right... OK see this as a note-to-self...
> > >
> > > >
> > > > @@ -2845,7 +2845,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
> > > >                               anon_vma_interval_tree_pre_update_vma(vma);
> > > >                               vma->vm_end = address;
> > > >                               /* Overwrite old entry in mtree. */
> > > > -                             vma_iter_store(&vmi, vma);
> > > > +                             vma_iter_store_attached(&vmi, vma);
> > > >                               anon_vma_interval_tree_post_update_vma(vma);
> > > >
> > > >                               perf_event_mmap(vma);
> > > > @@ -2925,7 +2925,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
> > > >                               vma->vm_start = address;
> > > >                               vma->vm_pgoff -= grow;
> > > >                               /* Overwrite old entry in mtree. */
> > > > -                             vma_iter_store(&vmi, vma);
> > > > +                             vma_iter_store_attached(&vmi, vma);
> > > >                               anon_vma_interval_tree_post_update_vma(vma);
> > > >
> > > >                               perf_event_mmap(vma);
> > > > diff --git a/mm/vma.h b/mm/vma.h
> > > > index 2a2668de8d2c..63dd38d5230c 100644
> > > > --- a/mm/vma.h
> > > > +++ b/mm/vma.h
> > > > @@ -365,9 +365,10 @@ static inline struct vm_area_struct *vma_iter_load(struct vma_iterator *vmi)
> > > >  }
> > > >
> > > >  /* Store a VMA with preallocated memory */
> > > > -static inline void vma_iter_store(struct vma_iterator *vmi,
> > > > -                               struct vm_area_struct *vma)
> > > > +static inline void vma_iter_store_attached(struct vma_iterator *vmi,
> > > > +                                        struct vm_area_struct *vma)
> > > >  {
> > > > +     vma_assert_attached(vma);
> > > >
> > > >  #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
> > > >       if (MAS_WARN_ON(&vmi->mas, vmi->mas.status != ma_start &&
> > > > @@ -390,7 +391,13 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
> > > >
> > > >       __mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
> > > >       mas_store_prealloc(&vmi->mas, vma);
> > > > +}
> > > > +
> > > > +static inline void vma_iter_store(struct vma_iterator *vmi,
> > > > +                               struct vm_area_struct *vma)
> > > > +{
> > > >       vma_mark_attached(vma);
> > > > +     vma_iter_store_attached(vmi, vma);
> > > >  }
> > > >
> > >
> > > See comment at top, and we need some comments here to explain why we're
> > > going to pains to do this.
> >
> > Ack. I'll amend the patch description to make that clear.
> >
> > >
> > > What about mm/nommu.c? I guess these cases are always new VMAs.
> >
> > CONFIG_PER_VMA_LOCK depends on !CONFIG_NOMMU, so in the nommu case all
> > these attach/detach functions become NOPs.
> >
> > >
> > > We probably definitely need to check this series in a nommu setup, have you
> > > done this? As I can see this breaking things. Then again I suppose you'd
> > > have expected bots to moan by now...
> > >
> > > >  static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
> > > > --
> > > > 2.47.1.613.gc27f4b7a9f-goog
> > > >


Thread overview: 140+ messages
2025-01-11  4:25 [PATCH v9 00/17] reimplement per-vma lock as a refcount Suren Baghdasaryan
2025-01-11  4:25 ` [PATCH v9 01/17] mm: introduce vma_start_read_locked{_nested} helpers Suren Baghdasaryan
2025-01-11  4:25 ` [PATCH v9 02/17] mm: move per-vma lock into vm_area_struct Suren Baghdasaryan
2025-01-11  4:25 ` [PATCH v9 03/17] mm: mark vma as detached until it's added into vma tree Suren Baghdasaryan
2025-01-11  4:25 ` [PATCH v9 04/17] mm: introduce vma_iter_store_attached() to use with attached vmas Suren Baghdasaryan
2025-01-13 11:58   ` Lorenzo Stoakes
2025-01-13 16:31     ` Suren Baghdasaryan
2025-01-13 16:44       ` Lorenzo Stoakes
2025-01-13 16:47       ` Lorenzo Stoakes
2025-01-13 19:09         ` Suren Baghdasaryan [this message]
2025-01-14 11:38           ` Lorenzo Stoakes
2025-01-11  4:25 ` [PATCH v9 05/17] mm: mark vmas detached upon exit Suren Baghdasaryan
2025-01-13 12:05   ` Lorenzo Stoakes
2025-01-13 17:02     ` Suren Baghdasaryan
2025-01-13 17:13       ` Lorenzo Stoakes
2025-01-13 19:11         ` Suren Baghdasaryan
2025-01-13 20:32           ` Vlastimil Babka
2025-01-13 20:42             ` Suren Baghdasaryan
2025-01-14 11:36               ` Lorenzo Stoakes
2025-01-11  4:25 ` [PATCH v9 06/17] types: move struct rcuwait into types.h Suren Baghdasaryan
2025-01-13 14:46   ` Lorenzo Stoakes
2025-01-11  4:25 ` [PATCH v9 07/17] mm: allow vma_start_read_locked/vma_start_read_locked_nested to fail Suren Baghdasaryan
2025-01-13 15:25   ` Lorenzo Stoakes
2025-01-13 17:53     ` Suren Baghdasaryan
2025-01-14 11:48       ` Lorenzo Stoakes
2025-01-11  4:25 ` [PATCH v9 08/17] mm: move mmap_init_lock() out of the header file Suren Baghdasaryan
2025-01-13 15:27   ` Lorenzo Stoakes
2025-01-13 17:53     ` Suren Baghdasaryan
2025-01-11  4:25 ` [PATCH v9 09/17] mm: uninline the main body of vma_start_write() Suren Baghdasaryan
2025-01-13 15:52   ` Lorenzo Stoakes
2025-01-11  4:25 ` [PATCH v9 10/17] refcount: introduce __refcount_{add|inc}_not_zero_limited Suren Baghdasaryan
2025-01-11  6:31   ` Hillf Danton
2025-01-11  9:59     ` Suren Baghdasaryan
2025-01-11 10:00       ` Suren Baghdasaryan
2025-01-11 12:13       ` Hillf Danton
2025-01-11 17:11         ` Suren Baghdasaryan
2025-01-11 23:44           ` Hillf Danton
2025-01-12  0:31             ` Suren Baghdasaryan
2025-01-15  9:39           ` Peter Zijlstra
2025-01-16 10:52             ` Hillf Danton
2025-01-11 12:39   ` David Laight
2025-01-11 17:07     ` Matthew Wilcox
2025-01-11 18:30     ` Paul E. McKenney
2025-01-11 22:19       ` David Laight
2025-01-11 22:50         ` [PATCH v9 10/17] refcount: introduce __refcount_{add|inc}_not_zero_limited - clang 17.0.1 bug David Laight
2025-01-12 11:37           ` David Laight
2025-01-12 17:56             ` Paul E. McKenney
2025-01-11  4:25 ` [PATCH v9 11/17] mm: replace vm_lock and detached flag with a reference count Suren Baghdasaryan
2025-01-11 11:24   ` Mateusz Guzik
2025-01-11 20:14     ` Suren Baghdasaryan
2025-01-11 20:16       ` Suren Baghdasaryan
2025-01-11 20:31       ` Mateusz Guzik
2025-01-11 20:58         ` Suren Baghdasaryan
2025-01-11 20:38       ` Vlastimil Babka
2025-01-13  1:47       ` Wei Yang
2025-01-13  2:25         ` Wei Yang
2025-01-13 21:14           ` Suren Baghdasaryan
2025-01-13 21:08         ` Suren Baghdasaryan
2025-01-15 10:48       ` Peter Zijlstra
2025-01-15 11:13         ` Peter Zijlstra
2025-01-15 15:00           ` Suren Baghdasaryan
2025-01-15 15:35             ` Peter Zijlstra
2025-01-15 15:38               ` Peter Zijlstra
2025-01-15 16:22                 ` Suren Baghdasaryan
2025-01-15 16:00           ` [PATCH] refcount: Strengthen inc_not_zero() Peter Zijlstra
2025-01-16 15:12             ` Suren Baghdasaryan
2025-01-17 15:41             ` Will Deacon
2025-01-27 14:09               ` Will Deacon
2025-01-27 19:21                 ` Suren Baghdasaryan
2025-01-28 23:51                   ` Suren Baghdasaryan
2025-02-06  2:52                     ` [PATCH 1/1] refcount: provide ops for cases when object's memory can be reused Suren Baghdasaryan
2025-02-06 10:41                       ` Vlastimil Babka
2025-02-06  3:03                     ` [PATCH] refcount: Strengthen inc_not_zero() Suren Baghdasaryan
2025-02-13 23:04                       ` Suren Baghdasaryan
2025-01-17 16:13             ` Matthew Wilcox
2025-01-12  2:59   ` [PATCH v9 11/17] mm: replace vm_lock and detached flag with a reference count Wei Yang
2025-01-12 17:35     ` Suren Baghdasaryan
2025-01-13  0:59       ` Wei Yang
2025-01-13  2:37   ` Wei Yang
2025-01-13 21:16     ` Suren Baghdasaryan
2025-01-13  9:36   ` Wei Yang
2025-01-13 21:18     ` Suren Baghdasaryan
2025-01-15  2:58   ` Wei Yang
2025-01-15  3:12     ` Suren Baghdasaryan
2025-01-15 12:05       ` Wei Yang
2025-01-15 15:01         ` Suren Baghdasaryan
2025-01-16  1:37           ` Wei Yang
2025-01-16  1:41             ` Suren Baghdasaryan
2025-01-16  9:10               ` Wei Yang
2025-01-11  4:25 ` [PATCH v9 12/17] mm: move lesser used vma_area_struct members into the last cacheline Suren Baghdasaryan
2025-01-13 16:15   ` Lorenzo Stoakes
2025-01-15 10:50   ` Peter Zijlstra
2025-01-15 16:39     ` Suren Baghdasaryan
2025-02-13 22:59       ` Suren Baghdasaryan
2025-01-11  4:26 ` [PATCH v9 13/17] mm/debug: print vm_refcnt state when dumping the vma Suren Baghdasaryan
2025-01-13 16:21   ` Lorenzo Stoakes
2025-01-13 16:35     ` Liam R. Howlett
2025-01-13 17:57       ` Suren Baghdasaryan
2025-01-14 11:41         ` Lorenzo Stoakes
2025-01-11  4:26 ` [PATCH v9 14/17] mm: remove extra vma_numab_state_init() call Suren Baghdasaryan
2025-01-13 16:28   ` Lorenzo Stoakes
2025-01-13 17:56     ` Suren Baghdasaryan
2025-01-14 11:45       ` Lorenzo Stoakes
2025-01-11  4:26 ` [PATCH v9 15/17] mm: prepare lock_vma_under_rcu() for vma reuse possibility Suren Baghdasaryan
2025-01-11  4:26 ` [PATCH v9 16/17] mm: make vma cache SLAB_TYPESAFE_BY_RCU Suren Baghdasaryan
2025-01-15  2:27   ` Wei Yang
2025-01-15  3:15     ` Suren Baghdasaryan
2025-01-15  3:58       ` Liam R. Howlett
2025-01-15  5:41         ` Suren Baghdasaryan
2025-01-15  3:59       ` Mateusz Guzik
2025-01-15  5:47         ` Suren Baghdasaryan
2025-01-15  5:51           ` Mateusz Guzik
2025-01-15  6:41             ` Suren Baghdasaryan
2025-01-15  7:58       ` Vlastimil Babka
2025-01-15 15:10         ` Suren Baghdasaryan
2025-02-13 22:56           ` Suren Baghdasaryan
2025-01-15 12:17       ` Wei Yang
2025-01-15 21:46         ` Suren Baghdasaryan
2025-01-11  4:26 ` [PATCH v9 17/17] docs/mm: document latest changes to vm_lock Suren Baghdasaryan
2025-01-13 16:33   ` Lorenzo Stoakes
2025-01-13 17:56     ` Suren Baghdasaryan
2025-01-11  4:52 ` [PATCH v9 00/17] reimplement per-vma lock as a refcount Matthew Wilcox
2025-01-11  9:45   ` Suren Baghdasaryan
2025-01-13 12:14 ` Lorenzo Stoakes
2025-01-13 16:58   ` Suren Baghdasaryan
2025-01-13 17:11     ` Lorenzo Stoakes
2025-01-13 19:00       ` Suren Baghdasaryan
2025-01-14 11:35         ` Lorenzo Stoakes
2025-01-14  1:49   ` Andrew Morton
2025-01-14  2:53     ` Suren Baghdasaryan
2025-01-14  4:09       ` Andrew Morton
2025-01-14  9:09         ` Vlastimil Babka
2025-01-14 10:27           ` Hillf Danton
2025-01-14  9:47         ` Lorenzo Stoakes
2025-01-14 14:59         ` Liam R. Howlett
2025-01-14 15:54           ` Suren Baghdasaryan
2025-01-15 11:34             ` Lorenzo Stoakes
2025-01-15 15:14               ` Suren Baghdasaryan
2025-01-28  5:26 ` Shivank Garg
2025-01-28  5:50   ` Suren Baghdasaryan
