From: Marcelo Tosatti <mtosatti@redhat.com>
To: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, frederic@kernel.org, tglx@linutronix.de,
peterz@infradead.org, nilal@redhat.com, mgorman@suse.de,
linux-rt-users@vger.kernel.org, cl@linux.com, ppandit@redhat.com
Subject: Re: [PATCH v2 0/3] mm/page_alloc: Remote per-cpu page list drain support
Date: Wed, 1 Dec 2021 11:01:50 -0300
Message-ID: <20211201140149.GA12861@fuller.cnet>
In-Reply-To: <43462fe11258395f4e885c3d594a3ed1b604b858.camel@redhat.com>
On Tue, Nov 30, 2021 at 07:09:23PM +0100, Nicolas Saenz Julienne wrote:
> Hi Vlastimil, sorry for the late reply and thanks for your feedback. :)
>
> On Tue, 2021-11-23 at 15:58 +0100, Vlastimil Babka wrote:
> > > [1] Other approaches can be found here:
> > >
> > > - Static branch conditional on nohz_full, no performance loss, the extra
> > > config option makes it painful to maintain (v1):
> > > https://lore.kernel.org/linux-mm/20210921161323.607817-5-nsaenzju@redhat.com/
> > >
> > > - RCU based approach, complex, yet a bit less taxing performance wise
> > > (RFC):
> > > https://lore.kernel.org/linux-mm/20211008161922.942459-4-nsaenzju@redhat.com/
> >
> > Hm, I wonder if there might still be another alternative. IIRC I did
> > propose at some point a local drain on the NOHZ cpu before returning to
> > userspace, and then avoiding that cpu in remote drains, but tglx didn't like
> > the idea of making entering the NOHZ full mode more expensive [1].
> >
> > But what if we instead set pcp->high = 0 for these cpus so they would avoid
> > populating the pcplists in the first place? Then there wouldn't have to be a
> > drain at all. On the other hand page allocator operations would not benefit
> > from zone lock batching on those cpus. But perhaps that would be an
> > acceptable tradeoff, as a nohz cpu is expected to run in userspace most of
> > the time, and page allocator operations are rare except maybe some initial
> > page faults? (I assume those kinds of workloads pre-populate and/or mlock
> > their address space anyway).
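A minimal sketch of that idea, condensed from the 5.15-era free path
(order and migratetype handling dropped, so this is not the actual
mm/page_alloc.c code):

/*
 * Sketch only: roughly what free_unref_page_commit() does, with
 * everything but the high-watermark check stripped out.
 */
static void free_unref_page_commit_sketch(struct zone *zone,
                                          struct per_cpu_pages *pcp,
                                          struct page *page)
{
        /* The freed page lands on this CPU's list first... */
        list_add(&page->lru, &pcp->lists[0]);
        pcp->count++;

        /*
         * ...but with pcp->high forced to 0 on a nohz_full CPU this
         * check fires on every free: the list is flushed to the buddy
         * allocator under zone->lock immediately, so remote CPUs never
         * have anything to drain here.
         */
        if (pcp->count >= READ_ONCE(pcp->high))
                free_pcppages_bulk(zone, pcp->count, pcp);
}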
>
> I've looked a bit into this and it seems straightforward. Our workloads
> pre-populate everything, and a slight startup performance hit is not that
> tragic (I'll measure it nonetheless). The per-cpu nohz_full state will at
> some point become dynamic, but the feature seems simple to disable/enable.
> I'll have to teach __drain_all_pages(zone, force_all_cpus=true) to bypass
> this special case, but that's all. I might have a go at this.
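The bypass could look roughly like this. This is a sketch against the
5.15-era data structures; the real __drain_all_pages() queues drain work
on each CPU rather than walking remote lists directly, which is elided
here:

static void __drain_all_pages_sketch(struct zone *zone, bool force_all_cpus)
{
        int cpu;

        for_each_online_cpu(cpu) {
                struct per_cpu_pages *pcp =
                        per_cpu_ptr(zone->per_cpu_pageset, cpu);

                /*
                 * The new special case: a nohz_full CPU runs with
                 * pcp->high == 0 and caches no pages, so skip it,
                 * unless the caller insists on visiting every CPU
                 * (e.g. memory offlining resetting pcp state), which
                 * is exactly what force_all_cpus preserves.
                 */
                if (!force_all_cpus && tick_nohz_full_cpu(cpu))
                        continue;

                if (pcp->count || force_all_cpus)
                        drain_pages_zone(cpu, zone);
        }
}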
>
> Thanks!
>
> --
> Nicolás Sáenz
True, but a nohz cpu does not necessarily have to run in userspace most
of the time. For example, an application can enter nohz_full mode, go
back to userspace, idle, then return from idle, all without leaving
nohz_full mode.

So it's not clear that nohz_full is an appropriate trigger for setting
pcp->high = 0. Perhaps a task isolation feature would be a more
appropriate place for it.
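In that model an explicit isolation request, not nohz_full membership,
would flip the switch. Something like the following, assuming an
entirely hypothetical task isolation hook (no such upstream interface
exists at this point):

/* Hypothetical: run on a CPU when a task requests isolation on it. */
static void task_isolation_zero_pcp_high(void)
{
        struct zone *zone;
        int cpu = get_cpu();

        for_each_populated_zone(zone) {
                struct per_cpu_pages *pcp =
                        per_cpu_ptr(zone->per_cpu_pageset, cpu);

                /* Stop caching pages on this CPU from now on... */
                WRITE_ONCE(pcp->high, 0);
        }

        /* ...and flush whatever the pcplists already hold. */
        drain_local_pages(NULL);
        put_cpu();
}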
Thread overview: 17+ messages
2021-11-03 17:05 [PATCH v2 0/3] mm/page_alloc: Remote per-cpu page list drain support Nicolas Saenz Julienne
2021-11-03 17:05 ` [PATCH v2 1/3] mm/page_alloc: Don't pass pfn to free_unref_page_commit() Nicolas Saenz Julienne
2021-11-23 14:41 ` Vlastimil Babka
2021-11-03 17:05 ` [PATCH v2 2/3] mm/page_alloc: Convert per-cpu lists' local locks to per-cpu spin locks Nicolas Saenz Julienne
2021-11-04 14:38 ` [mm/page_alloc] 5541e53659: BUG:spinlock_bad_magic_on_CPU kernel test robot
2021-11-04 16:39 ` Nicolas Saenz Julienne
2021-11-03 17:05 ` [PATCH v2 3/3] mm/page_alloc: Remotely drain per-cpu lists Nicolas Saenz Julienne
2021-12-03 14:13 ` Mel Gorman
2021-12-09 10:50 ` Nicolas Saenz Julienne
2021-12-09 17:45 ` Marcelo Tosatti
2021-12-10 10:55 ` Mel Gorman
2021-12-14 10:58 ` Marcelo Tosatti
2021-12-14 11:42 ` Christoph Lameter
2021-12-14 12:25 ` Marcelo Tosatti
2021-11-23 14:58 ` [PATCH v2 0/3] mm/page_alloc: Remote per-cpu page list drain support Vlastimil Babka
2021-11-30 18:09 ` Nicolas Saenz Julienne
2021-12-01 14:01 ` Marcelo Tosatti [this message]