From: Hugh Dickins <hughd@google.com>
To: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>, Will Deacon <will@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Keir Fraser <keirf@google.com>, Jason Gunthorpe <jgg@ziepe.ca>,
John Hubbard <jhubbard@nvidia.com>,
Frederick Mayle <fmayle@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
Peter Xu <peterx@redhat.com>, Rik van Riel <riel@surriel.com>,
Vlastimil Babka <vbabka@suse.cz>, Ge Yang <yangge1116@126.com>
Subject: Re: [PATCH] mm/gup: Drain batched mlock folio processing before attempting migration
Date: Thu, 28 Aug 2025 09:12:30 -0700 (PDT)
Message-ID: <3194a67b-194c-151d-a961-08c0d0f24d9b@google.com>
In-Reply-To: <a0d1d889-c711-494b-a85a-33cbde4688ba@redhat.com>
On Thu, 28 Aug 2025, David Hildenbrand wrote:
> On 28.08.25 10:47, Hugh Dickins wrote:
...
> > It took several days in search of the least bad compromise, but
> > in the end I concluded the opposite of what we'd intended above.
> >
> > There is a fundamental incompatibility between my 5.18 2fbb0c10d1e8
> > ("mm/munlock: mlock_page() munlock_page() batch by pagevec")
> > and Ge Yang's 6.11 33dfe9204f29
> > ("mm/gup: clear the LRU flag of a page before adding to LRU batch").
> >
> > It turns out that the mm/swap.c folio batches (apart from lru_add)
> > are all for best-effort operations, where it does not matter if one
> > is missed; whereas mlock and munlock are more serious. Probably mlock could
> > be (not very satisfactorily) converted, but then munlock? Because
> > of failed folio_test_clear_lru()s, it would be far too likely to
> > err on either side, munlocking too soon or too late.
> >
> > I've concluded that one or the other has to go. If we're having
> > a beauty contest, there's no doubt that 33dfe9204f29 is much nicer
> > than 2fbb0c10d1e8 (which is itself far from perfect). But functionally,
> > I'm afraid that removing the mlock/munlock batching will show up as a
> > perceptible regression in realistic workloads; and on consideration,
> > I've found no real justification for the LRU flag clearing change.
>
> Just to understand what you are saying: are you saying that we will go back to
> having a folio be part of multiple LRU caches?
Yes. Well, if you count the mlock/munlock batches as "LRU caches",
then that has been so all along.
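(For anyone who hasn't looked at it recently, the pattern which
33dfe9204f29 introduced is roughly the following: a simplified sketch
in kernel style, not the actual mm/swap.c code, and the helper name is
invented for illustration.

/*
 * Only take the folio into a per-CPU batch if we win the race to
 * clear its LRU flag: so a folio sits in at most one batch at a time.
 */
static bool batch_add_if_lru(struct folio_batch *fbatch,
			     struct folio *folio)
{
	if (!folio_test_clear_lru(folio))
		return false;	/* already claimed by another batch */
	folio_get(folio);	/* batch's reference; caller holds its own */
	folio_batch_add(fbatch, folio);
	return true;
}

That is fine for best-effort operations: when folio_test_clear_lru()
fails, the operation is simply skipped. mlock and munlock cannot afford
to be skipped, which is the incompatibility described above.)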
> :/ If so, I really, really
> hope that we can find another way and not go back to that old handling.
For what reason? It sounded like a nice "invariant" to keep in mind,
but it's a limitation, and what purpose was it actually serving?
If it's the "spare room in struct page to keep the address of that one
batch entry ... efficiently extract ..." that I was dreaming of: that
has to be a future thing, for when memdescs perhaps allow an extensible
structure to be attached to a folio. Extending that structure for an
mlocked folio (to hold the mlock_count instead of squeezing it into
lru.prev) would not need mlock/munlock batching at all (I guess: far
from uppermost in my mind!); and including a field to "efficiently
extract" from an LRU batch would be nice.
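(For reference, today's squeeze is roughly this union, simplified from
struct folio in include/linux/mm_types.h: an mlocked folio is kept off
the LRU list, so lru.prev is free to hold the count.

	union {
		struct list_head lru;	/* when on an LRU list */
		struct {
			void *__filler;	/* occupies lru.next's slot */
			unsigned int mlock_count; /* overlays lru.prev */
		};
	};
)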
But, memdescs or not, there will always be pressure to keep the common
struct as small as possible, so I don't know if we would actually go
that way.
But I suspect that was not your reason at all: please illuminate.
Hugh
Thread overview: 20+ messages
2025-08-15 10:18 [PATCH] mm/gup: Drain batched mlock folio processing before attempting migration Will Deacon
2025-08-16 1:03 ` John Hubbard
2025-08-16 4:33 ` Hugh Dickins
2025-08-18 13:38 ` Will Deacon
2025-08-16 4:14 ` Hugh Dickins
2025-08-16 8:15 ` David Hildenbrand
2025-08-18 13:31 ` Will Deacon
2025-08-18 14:31 ` Will Deacon
2025-08-25 1:25 ` Hugh Dickins
2025-08-25 16:04 ` David Hildenbrand
2025-08-28 8:47 ` Hugh Dickins
2025-08-28 8:59 ` David Hildenbrand
2025-08-28 16:12 ` Hugh Dickins [this message]
2025-08-28 20:38 ` David Hildenbrand
2025-08-29 1:58 ` Hugh Dickins
2025-08-29 8:56 ` David Hildenbrand
2025-08-29 11:57 ` Will Deacon
2025-08-29 13:21 ` Will Deacon
2025-08-29 16:04 ` Hugh Dickins
2025-08-29 15:46 ` Hugh Dickins