From: Andrew Morton <akpm@linux-foundation.org>
To: Yu Zhao <yuzhao@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>,
Nicolas Saenz Julienne <nsaenzju@redhat.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
Vlastimil Babka <vbabka@suse.cz>,
Michal Hocko <mhocko@kernel.org>, Hugh Dickins <hughd@google.com>,
Marek Szyprowski <m.szyprowski@samsung.com>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH v5 00/7] Drain remote per-cpu directly
Date: Sun, 3 Jul 2022 16:35:31 -0700
Message-ID: <20220703163531.beed10c723f1c74a9001573c@linux-foundation.org>
In-Reply-To: <CAOUHufZj87ewG6_OObmDByxHv51DgbkB-O6oMitw72QF1JrkcQ@mail.gmail.com>
On Sun, 3 Jul 2022 17:31:09 -0600 Yu Zhao <yuzhao@google.com> wrote:
> > > This series adjusts the locking. A spinlock is added to struct
> > > per_cpu_pages to protect the list contents while local_lock_irq is
> > > ultimately replaced by just the spinlock in the final patch. This allows
> > > a remote CPU to safely drain a remote per-cpu list. Follow-on work
> > > should allow the spin_lock_irqsave to be converted to spin_lock to
> > > avoid IRQs being disabled/enabled in most cases. The follow-on patch
> > > will be one kernel release later as it is relatively high risk, and
> > > it'll make bisections clearer if there are any problems.
> >
> > I plan to move this and Mel's fix to [7/7] into mm-stable around July 8.
>
> I've tested it together with the Maple Tree series, and it passed a series of stress tests.
Cool, thanks. I added your Tested-by: to everything.
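
For illustration, here is a minimal kernel-style C sketch of the locking
pattern described in the quoted cover letter. The field layout and the
drain_remote_pcp() helper are simplified placeholders, not the actual
mm/page_alloc.c code:

#include <linux/spinlock.h>
#include <linux/list.h>

/*
 * Sketch only: a spinlock embedded in the per-CPU structure protects
 * the list contents, so a CPU other than the owner can safely drain
 * the lists. The real struct per_cpu_pages carries more fields.
 */
struct per_cpu_pages {
	spinlock_t lock;		/* protects the lists below */
	int count;			/* number of pages on the lists */
	struct list_head lists[1];	/* simplified to a single list */
};

/* Hypothetical helper: drain another CPU's list from this CPU. */
static void drain_remote_pcp(struct per_cpu_pages *pcp)
{
	unsigned long flags;

	/*
	 * spin_lock_irqsave() for now; the follow-on work mentioned
	 * above would convert this to spin_lock() where disabling
	 * IRQs is unnecessary.
	 */
	spin_lock_irqsave(&pcp->lock, flags);
	/* ... return pages on pcp->lists to the buddy allocator ... */
	pcp->count = 0;
	spin_unlock_irqrestore(&pcp->lock, flags);
}

With the lock in the structure rather than a local_lock, a remote drain
no longer needs to schedule work on the owning CPU, which is the point
of the series.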
Thread overview: 19+ messages
2022-06-24 12:54 [PATCH v5 00/7] Drain remote per-cpu directly Mel Gorman
2022-06-24 12:54 ` [PATCH 1/7] mm/page_alloc: Add page->buddy_list and page->pcp_list Mel Gorman
2022-06-24 12:54 ` [PATCH 2/7] mm/page_alloc: Use only one PCP list for THP-sized allocations Mel Gorman
2022-06-24 12:54 ` [PATCH 3/7] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper Mel Gorman
2022-06-24 12:54 ` [PATCH 4/7] mm/page_alloc: Remove mistaken page == NULL check in rmqueue Mel Gorman
2022-06-24 12:54 ` [PATCH 5/7] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-07-04 12:31   ` Vlastimil Babka
2022-07-05  7:20     ` Mel Gorman
2022-07-04 16:32   ` Nicolas Saenz Julienne
2022-06-24 12:54 ` [PATCH 6/7] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
2022-07-04 14:28   ` Vlastimil Babka
2022-06-24 12:54 ` [PATCH 7/7] mm/page_alloc: Replace local_lock with normal spinlock Mel Gorman
2022-06-24 18:59   ` Yu Zhao
2022-06-27  8:46     ` [PATCH] mm/page_alloc: Replace local_lock with normal spinlock -fix Mel Gorman
2022-07-04 14:39   ` [PATCH 7/7] mm/page_alloc: Replace local_lock with normal spinlock Vlastimil Babka
2022-07-04 16:33   ` Nicolas Saenz Julienne
2022-07-03 23:28 ` [PATCH v5 00/7] Drain remote per-cpu directly Andrew Morton
2022-07-03 23:31   ` Yu Zhao
2022-07-03 23:35     ` Andrew Morton [this message]