From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
To: Mel Gorman <mgorman@techsingularity.net>,
Andrew Morton <akpm@linux-foundation.org>
Cc: Marcelo Tosatti <mtosatti@redhat.com>,
Vlastimil Babka <vbabka@suse.cz>,
Michal Hocko <mhocko@kernel.org>, Hugh Dickins <hughd@google.com>,
Yu Zhao <yuzhao@google.com>,
Marek Szyprowski <m.szyprowski@samsung.com>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH 5/7] mm/page_alloc: Protect PCP lists with a spinlock
Date: Mon, 04 Jul 2022 18:32:06 +0200 [thread overview]
Message-ID: <7c66ffb07a06f1c64985c3b6e3c212f1f247a652.camel@redhat.com> (raw)
In-Reply-To: <20220624125423.6126-6-mgorman@techsingularity.net>
On Fri, 2022-06-24 at 13:54 +0100, Mel Gorman wrote:
> Currently the PCP lists are protected by local_lock_irqsave to prevent
> migration and IRQ reentrancy, but this is inconvenient: remote draining
> of the lists is impossible, a workqueue is required instead, and every
> allocation/free must disable and then re-enable interrupts, which is
> expensive.
>
> As preparation for dealing with both of those problems, protect the lists
> with a spinlock. The IRQ-unsafe version of the lock is used because IRQs
> are already disabled by local_lock_irqsave. spin_trylock is used in
> preparation for a time when local_lock could be used instead of
> local_lock_irqsave.
>
> The per_cpu_pages still fits within the same number of cache lines after
> this patch relative to before the series.
>
> struct per_cpu_pages {
> spinlock_t lock; /* 0 4 */
> int count; /* 4 4 */
> int high; /* 8 4 */
> int batch; /* 12 4 */
> short int free_factor; /* 16 2 */
> short int expire; /* 18 2 */
>
> /* XXX 4 bytes hole, try to pack */
>
> struct list_head lists[13]; /* 24 208 */
>
> /* size: 256, cachelines: 4, members: 7 */
> /* sum members: 228, holes: 1, sum holes: 4 */
> /* padding: 24 */
> } __attribute__((__aligned__(64)));
>
> There is overhead in the fast path due to acquiring the spinlock even
> though the spinlock is per-cpu and uncontended in the common case. Page
> Fault Test (PFT) reported the following results on a 1-socket machine.
>
> 5.19.0-rc3 5.19.0-rc3
> vanilla mm-pcpspinirq-v5r16
> Hmean faults/sec-1 869275.7381 ( 0.00%) 874597.5167 * 0.61%*
> Hmean faults/sec-3 2370266.6681 ( 0.00%) 2379802.0362 * 0.40%*
> Hmean faults/sec-5 2701099.7019 ( 0.00%) 2664889.7003 * -1.34%*
> Hmean faults/sec-7 3517170.9157 ( 0.00%) 3491122.8242 * -0.74%*
> Hmean faults/sec-8 3965729.6187 ( 0.00%) 3939727.0243 * -0.66%*
>
> There is a small hit in the number of faults per second but given that the
> results are more stable, it's borderline noise.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Tested-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
Thanks!
--
Nicolás Sáenz
Thread overview: 21+ messages
2022-06-24 12:54 [PATCH v5 00/7] Drain remote per-cpu directly Mel Gorman
2022-06-24 12:54 ` [PATCH 1/7] mm/page_alloc: Add page->buddy_list and page->pcp_list Mel Gorman
2022-06-24 12:54 ` [PATCH 2/7] mm/page_alloc: Use only one PCP list for THP-sized allocations Mel Gorman
2022-06-24 12:54 ` [PATCH 3/7] mm/page_alloc: Split out buddy removal code from rmqueue into separate helper Mel Gorman
2022-06-24 12:54 ` [PATCH 4/7] mm/page_alloc: Remove mistaken page == NULL check in rmqueue Mel Gorman
2022-06-24 12:54 ` [PATCH 5/7] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-07-04 12:31 ` Vlastimil Babka
2022-07-05 7:20 ` Mel Gorman
2022-07-04 16:32 ` Nicolas Saenz Julienne [this message]
2022-06-24 12:54 ` [PATCH 6/7] mm/page_alloc: Remotely drain per-cpu lists Mel Gorman
2022-07-04 14:28 ` Vlastimil Babka
2022-06-24 12:54 ` [PATCH 7/7] mm/page_alloc: Replace local_lock with normal spinlock Mel Gorman
2022-06-24 18:59 ` Yu Zhao
2022-06-27 8:46 ` [PATCH] mm/page_alloc: Replace local_lock with normal spinlock -fix Mel Gorman
2022-07-04 14:39 ` [PATCH 7/7] mm/page_alloc: Replace local_lock with normal spinlock Vlastimil Babka
2022-07-04 16:33 ` Nicolas Saenz Julienne
2022-07-03 23:28 ` [PATCH v5 00/7] Drain remote per-cpu directly Andrew Morton
2022-07-03 23:31 ` Yu Zhao
2022-07-03 23:35 ` Andrew Morton
-- strict thread matches above, loose matches on Subject: below --
2022-06-13 12:56 [PATCH v4 " Mel Gorman
2022-06-13 12:56 ` [PATCH 5/7] mm/page_alloc: Protect PCP lists with a spinlock Mel Gorman
2022-06-16 15:59 ` Vlastimil Babka