From: Andrew Morton <akpm@linux-foundation.org>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Nicolas Saenz Julienne <nsaenzju@redhat.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
Vlastimil Babka <vbabka@suse.cz>,
Michal Hocko <mhocko@kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH 0/6] Drain remote per-cpu directly v3
Date: Fri, 13 May 2022 12:38:05 -0700
Message-ID: <20220513123805.41e560392d028c271b36847d@linux-foundation.org>
In-Reply-To: <20220513142330.GI3441@techsingularity.net>
On Fri, 13 May 2022 15:23:30 +0100 Mel Gorman <mgorman@techsingularity.net> wrote:
> Correct.
>
> > > the draining in non-deterministic.
> >
> > s/n/s/;)
> >
>
> Think that one is ok. At least spell check did not complain.
s/in/is/
> > > Currently an IRQ-safe local_lock protects the page allocator per-cpu lists.
> > > The local_lock on its own prevents migration and the IRQ disabling protects
> > > from corruption due to an interrupt arriving while a page allocation is
> > > in progress. The locking is inherently unsafe for remote access unless
> > > the CPU is hot-removed.
> >
> > I don't understand the final sentence here. Which CPU and why does
> > hot-removing it make the locking safe?
> >
>
> The sentence can be dropped because it adds little and is potentially
> confusing. The PCP being safe to access remotely is specific to the
> context of the CPU being hot-removed. There are other special corner
> cases, like zone_pcp_disable, that modify a per-cpu structure remotely
> but not in a way that causes corruption.
OK. I pasted in your para from the other email. Current 0/n blurb:
Some setups, notably NOHZ_FULL CPUs, may be running realtime or
latency-sensitive applications that cannot tolerate interference due to
per-cpu drain work queued by __drain_all_pages(). Introduce a new
mechanism to remotely drain the per-cpu lists. It is made possible by
remotely locking new per-cpu spinlocks in 'struct per_cpu_pages'. This
has two advantages: the time to drain is more predictable, and other
unrelated tasks are not interrupted.
This series has the same intent as Nicolas' series "mm/page_alloc: Remote
per-cpu lists drain support" -- avoid interfering with a high-priority
task due to a workqueue item draining per-cpu page lists. While many
workloads can tolerate a brief interruption, it may cause a real-time
task running on a NOHZ_FULL CPU to miss a deadline and, at minimum, the
draining is non-deterministic.
Currently an IRQ-safe local_lock protects the page allocator per-cpu
lists. The local_lock on its own prevents migration and the IRQ disabling
protects from corruption due to an interrupt arriving while a page
allocation is in progress.
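To make the constraint concrete, the existing fast path looks roughly
like this (a simplified sketch based on the ~v5.18 mm/page_alloc.c;
pagesets.lock is the real local_lock, but the function name and indexing
the lists by order alone are condensations, not the literal code):

	static struct page *pcp_alloc_sketch(struct zone *zone, int order)
	{
		struct per_cpu_pages *pcp;
		struct page *page = NULL;
		unsigned long flags;

		/*
		 * Pins the task to this CPU and disables IRQs: nothing
		 * else, not even an interrupt on this CPU, can touch the
		 * lists while this is held.
		 */
		local_lock_irqsave(&pagesets.lock, flags);
		pcp = this_cpu_ptr(zone->per_cpu_pageset);

		if (!list_empty(&pcp->lists[order])) {
			page = list_first_entry(&pcp->lists[order],
						struct page, lru);
			list_del(&page->lru);
			pcp->count -= 1 << order;
		}

		local_unlock_irqrestore(&pagesets.lock, flags);
		return page;
	}

Another CPU has no way of taking pagesets.lock on this CPU's behalf,
which is why a drain currently has to be queued as work on the target
CPU.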
This series adjusts the locking. A spinlock is added to struct
per_cpu_pages to protect the list contents while local_lock_irq continues
to prevent migration and IRQ reentry. This allows a remote CPU to safely
drain another CPU's per-cpu lists.
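The new arrangement, sketched (field layout condensed; patch 5 has the
real struct and lock ordering):

	struct per_cpu_pages {
		spinlock_t lock;	/* protects lists and count */
		int count;		/* pages on the lists */
		/* high, batch and friends elided */
		struct list_head lists[NR_PCP_LISTS];
	};

	/*
	 * Local fast path: local_lock still prevents migration and IRQ
	 * reentry; the new spinlock is what actually guards the list
	 * contents, so it is almost never contended from here.
	 */
	local_lock_irqsave(&pagesets.lock, flags);
	pcp = this_cpu_ptr(zone->per_cpu_pageset);
	spin_lock(&pcp->lock);
	/* manipulate pcp->lists and pcp->count */
	spin_unlock(&pcp->lock);
	local_unlock_irqrestore(&pagesets.lock, flags);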
This is a partial series. Follow-on work should allow the local_irq_save
to be converted to a local_irq to avoid IRQs being disabled/enabled in
most cases. Consequently, there are some TODO comments highlighting the
places that would change if local_irq were used. However, there are
enough corner cases that it deserves a series of its own, separated by
one kernel release, and the priority right now is to avoid interfering
with high-priority tasks.
Patch 1 is a cosmetic patch to clarify when page->lru is storing buddy pages
and when it is storing per-cpu pages.
Patch 2 shrinks per_cpu_pages to make room for a spin lock. Strictly
speaking this is not necessary, but it avoids per_cpu_pages consuming
another cache line.
Patch 3 is a preparation patch to avoid code duplication.
Patch 4 is a simple micro-optimisation that improves code flow necessary for
a later patch to avoid code duplication.
Patch 5 uses a spin_lock to protect the per_cpu_pages contents while still
relying on local_lock to prevent migration, stabilise the pcp
lookup and prevent IRQ reentrancy.
Patch 6 drains remote per-cpu pages directly instead of using a workqueue.
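With pcp->lock in place, the drain no longer has to execute on the
target CPU. The shape of patch 6, as a sketch (the function name is
illustrative and the bulk-free step is elided; in-tree that step is
what free_pcppages_bulk() does):

	static void drain_zone_remote_sketch(struct zone *zone,
					     unsigned int cpu)
	{
		struct per_cpu_pages *pcp =
			per_cpu_ptr(zone->per_cpu_pageset, cpu);
		unsigned long flags;

		if (!pcp->count)
			return;

		/*
		 * Taken from *this* CPU: no IPI or workqueue item ever
		 * runs on 'cpu', so a pinned realtime task there never
		 * sees the drain.
		 */
		spin_lock_irqsave(&pcp->lock, flags);
		/* return everything on pcp->lists to the buddy allocator */
		spin_unlock_irqrestore(&pcp->lock, flags);
	}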