From: Marcelo Tosatti <mtosatti@redhat.com>
To: Christoph Lameter <cl@gentwo.de>
Cc: Aaron Tomlin <atomlin@atomlin.com>,
	Frederic Weisbecker <frederic@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v2 02/11] this_cpu_cmpxchg: ARM64: switch this_cpu_cmpxchg to locked, add _local function
Date: Thu, 16 Mar 2023 07:54:29 -0300	[thread overview]
Message-ID: <ZBL1ZToOqV5aKX49@tpad> (raw)
In-Reply-To: <df9e3ddc-e3be-4ceb-af7f-e5557b9228df@gentwo.de>

On Thu, Mar 16, 2023 at 12:56:20AM +0100, Christoph Lameter wrote:
> On Thu, 9 Feb 2023, Marcelo Tosatti wrote:
> 
> > Goal is to have vmstat_shepherd to transfer from
> > per-CPU counters to global counters remotely. For this,
> > an atomic this_cpu_cmpxchg is necessary.
> 
> The definition of this_cpu functionality is that it does *not* incur
> atomic overhead; it was introduced to *avoid* the overhead of atomic
> operations.
> 
> This sabotages this_cpu functionality,

Christoph,

Two points:

1) If you look at patch 7, users of this_cpu_cmpxchg are converted
to this_cpu_cmpxchg_local (except for the per-CPU vmstat counters).
It's up to the user of the interface, depending on its requirements,
to decide whether or not atomic operations (atomic with respect to
other processors) are necessary.
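To make the distinction concrete, here is a minimal sketch of the two
sides, modeled on the vmstat pattern this series aims for (identifiers
are illustrative, not the exact patch code):

        s8 __percpu *p = pcp->vm_stat_diff + item;
        s8 o, n, v;

        /* Owning CPU, hot path: must be fully atomic (LOCK cmpxchg
         * on x86) because the shepherd below may write concurrently. */
        do {
                o = this_cpu_read(*p);
                n = o + delta;
        } while (this_cpu_cmpxchg(*p, o, n) != o);

        /* vmstat_shepherd, running on another CPU: steal the
         * accumulated delta without interrupting the owning CPU. */
        do {
                v = READ_ONCE(*per_cpu_ptr(p, cpu));
        } while (cmpxchg(per_cpu_ptr(p, cpu), v, 0) != v);

Callers with no such remote writer keep the unlocked instruction via
this_cpu_cmpxchg_local.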

this_cpu_cmpxchg still retains the benefit of using segment registers,
as described in Documentation/core-api/this_cpu_ops.rst:

:Author: Christoph Lameter, August 4th, 2014
:Author: Pranith Kumar, Aug 2nd, 2014

this_cpu operations are a way of optimizing access to per cpu
variables associated with the *currently* executing processor. This is
done through the use of segment registers (or a dedicated register in
which the cpu permanently stores the beginning of the per cpu area for
a specific processor).

this_cpu operations add a per cpu variable offset to the processor
specific per cpu base and encode that operation in the instruction
operating on the per cpu variable.

This means that there are no atomicity issues between the calculation of
the offset and the operation on the data. Therefore it is not
necessary to disable preemption or interrupts to ensure that the
processor is not changed between the calculation of the address and
the operation on the data.
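As a concrete example (in the spirit of that document; the generated
instruction is x86-specific):

        /* Without this_cpu ops, the address calculation and the
         * modification are separate steps, so preemption must be
         * disabled around them: */
        int cpu = get_cpu();            /* disables preemption */
        per_cpu(counter, cpu)++;
        put_cpu();

        /* With this_cpu ops, the offset addition and the increment
         * become a single segment-prefixed instruction on x86,
         * roughly "incq %gs:counter", so no preemption protection
         * is needed: */
        this_cpu_inc(counter);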

2) The performance results seem to indicate that cache locking is
cheap on modern processors (in this particular case and in others as
well):

commit 4b23a68f953628eb4e4b7fe1294ebf93d4b8ceee ("mm/page_alloc:
protect PCP lists with a spinlock"):

    As preparation for dealing with both of those problems, protect the
    lists with a spinlock.  The IRQ-unsafe version of the lock is used
    because IRQs are already disabled by local_lock_irqsave.  spin_trylock
    is used in combination with local_lock_irqsave() but later will be
    replaced with a spin_trylock_irqsave when the local_lock is removed.

    The per_cpu_pages still fits within the same number of cache lines after
    this patch relative to before the series.

    struct per_cpu_pages {
            spinlock_t                 lock;                 /*     0     4 */
            int                        count;                /*     4     4 */
            int                        high;                 /*     8     4 */
            int                        batch;                /*    12     4 */
            short int                  free_factor;          /*    16     2 */
            short int                  expire;               /*    18     2 */

            /* XXX 4 bytes hole, try to pack */

            struct list_head           lists[13];            /*    24   208 */

            /* size: 256, cachelines: 4, members: 7 */
            /* sum members: 228, holes: 1, sum holes: 4 */
            /* padding: 24 */
    } __attribute__((__aligned__(64)));

    There is overhead in the fast path due to acquiring the spinlock even
    though the spinlock is per-cpu and uncontended in the common case.  Page
    Fault Test (PFT) reported the following results on a 1-socket machine.

                                         5.19.0-rc3               5.19.0-rc3
                                            vanilla      mm-pcpspinirq-v5r16
    Hmean     faults/sec-1   869275.7381 (   0.00%)   874597.5167 *   0.61%*
    Hmean     faults/sec-3  2370266.6681 (   0.00%)  2379802.0362 *   0.40%*
    Hmean     faults/sec-5  2701099.7019 (   0.00%)  2664889.7003 *  -1.34%*
    Hmean     faults/sec-7  3517170.9157 (   0.00%)  3491122.8242 *  -0.74%*
    Hmean     faults/sec-8  3965729.6187 (   0.00%)  3939727.0243 *  -0.66%*
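For reference, a hedged sketch of the locking pattern that commit
describes (names follow mm/page_alloc.c of that era, quoted from
memory, so treat them as approximate):

        unsigned long flags;
        struct per_cpu_pages *pcp;

        /* local_lock_irqsave() disables IRQs and pins us to this
         * CPU's pageset... */
        local_lock_irqsave(&pagesets.lock, flags);
        pcp = this_cpu_ptr(zone->per_cpu_pageset);

        /* ...so the IRQ-unsafe spinlock variant is sufficient; it
         * only needs to serialize against a remote CPU touching
         * these lists. */
        spin_lock(&pcp->lock);
        /* ... add or remove pages on pcp->lists ... */
        spin_unlock(&pcp->lock);
        local_unlock_irqrestore(&pagesets.lock, flags);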

And for this case:

To test the performance difference, a page allocator microbenchmark,
https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/bench/page_bench01.c,
was run with loops=1000000 on an Intel Core i7-11850H @ 2.50GHz.

For the single_page_alloc_free test, which executes the following loop:

        /** Loop to measure **/
        for (i = 0; i < rec->loops; i++) {
                my_page = alloc_page(gfp_mask);
                if (unlikely(my_page == NULL))
                        return 0;
                __free_page(my_page);
        }

Unit is cycles.

Vanilla                 Patched         Diff
115.25                  117             1.4%

(To be honest, the results are within the noise as well: during the
tests, the "LOCK cmpxchg" version showed no significant difference
from the plain "cmpxchg" version in the page allocator benchmark.)
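
For anyone who wants to reproduce the comparison in userspace, here is
a minimal analogue in x86-64 GNU C inline assembly (the helper names
are mine, not kernel APIs):

        #include <stdint.h>

        /* Plain cmpxchg: safe only against interruption on the same
         * CPU, analogous to this_cpu_cmpxchg_local(). */
        static inline uint64_t cmpxchg_local64(uint64_t *p, uint64_t old,
                                               uint64_t new)
        {
                asm volatile("cmpxchgq %2, %1"
                             : "+a" (old), "+m" (*p)
                             : "r" (new)
                             : "memory");
                return old;     /* previous value of *p */
        }

        /* LOCK cmpxchg: atomic with respect to other CPUs, analogous
         * to the proposed this_cpu_cmpxchg(). */
        static inline uint64_t cmpxchg_atomic64(uint64_t *p, uint64_t old,
                                                uint64_t new)
        {
                asm volatile("lock; cmpxchgq %2, %1"
                             : "+a" (old), "+m" (*p)
                             : "r" (new)
                             : "memory");
                return old;
        }

Timing both on a warm, CPU-local cache line should show the LOCK
prefix being handled by cache locking rather than bus locking, which
is consistent with the in-the-noise numbers above.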




Thread overview: 47+ messages
2023-02-09 15:01 [PATCH v2 00/11] fold per-CPU vmstats remotely Marcelo Tosatti
2023-02-09 15:01 ` [PATCH v2 01/11] mm/vmstat: remove remote node draining Marcelo Tosatti
2023-02-28 15:53   ` David Hildenbrand
2023-02-28 19:36     ` Marcelo Tosatti
2023-03-02 10:10       ` David Hildenbrand
2023-03-21 15:20         ` Mel Gorman
2023-03-21 17:31           ` Marcelo Tosatti
2023-03-02 17:21   ` Peter Xu
2023-03-02 17:27     ` Peter Xu
2023-03-02 19:17       ` Marcelo Tosatti
2023-03-02 18:56     ` Marcelo Tosatti
2023-02-09 15:01 ` [PATCH v2 02/11] this_cpu_cmpxchg: ARM64: switch this_cpu_cmpxchg to locked, add _local function Marcelo Tosatti
2023-03-02 10:42   ` David Hildenbrand
2023-03-02 10:51     ` David Hildenbrand
2023-03-02 14:32     ` Marcelo Tosatti
2023-03-02 20:53   ` Peter Xu
2023-03-02 21:04     ` Marcelo Tosatti
2023-03-02 21:25       ` Peter Xu
2023-03-03 15:39         ` Marcelo Tosatti
2023-03-03 15:47     ` Marcelo Tosatti
2023-03-15 23:56   ` Christoph Lameter
2023-03-16 10:54     ` Marcelo Tosatti [this message]
2023-02-09 15:01 ` [PATCH v2 03/11] this_cpu_cmpxchg: loongarch: " Marcelo Tosatti
2023-02-09 15:01 ` [PATCH v2 04/11] this_cpu_cmpxchg: S390: " Marcelo Tosatti
2023-02-09 15:01 ` [PATCH v2 05/11] this_cpu_cmpxchg: x86: " Marcelo Tosatti
2023-02-09 15:01 ` [PATCH v2 06/11] this_cpu_cmpxchg: asm-generic: " Marcelo Tosatti
2023-02-09 15:01 ` [PATCH v2 07/11] convert this_cpu_cmpxchg users to this_cpu_cmpxchg_local Marcelo Tosatti
2023-03-02 20:54   ` Peter Xu
2023-02-09 15:01 ` [PATCH v2 08/11] mm/vmstat: switch counter modification to cmpxchg Marcelo Tosatti
2023-03-02 10:47   ` David Hildenbrand
2023-03-02 14:47     ` Marcelo Tosatti
2023-03-02 16:20       ` Peter Xu
2023-03-02 19:11         ` Marcelo Tosatti
2023-03-02 20:06           ` Peter Xu
2023-02-09 15:01 ` [PATCH v2 09/11] mm/vmstat: use cmpxchg loop in cpu_vm_stats_fold Marcelo Tosatti
2023-03-01 22:57   ` Peter Xu
2023-03-02 13:55     ` Marcelo Tosatti
2023-03-02 21:19       ` Peter Xu
2023-03-03 15:17         ` Marcelo Tosatti
2023-02-09 15:02 ` [PATCH v2 10/11] mm/vmstat: switch vmstat shepherd to flush per-CPU counters remotely Marcelo Tosatti
2023-03-02 21:01   ` Peter Xu
2023-03-02 21:16     ` Marcelo Tosatti
2023-03-02 21:30       ` Peter Xu
2023-02-09 15:02 ` [PATCH v2 11/11] mm/vmstat: refresh stats remotely instead of via work item Marcelo Tosatti
2023-02-23 14:54 ` [PATCH v2 00/11] fold per-CPU vmstats remotely Marcelo Tosatti
2023-02-24  2:34   ` Hillf Danton
2023-02-27 19:41     ` Marcelo Tosatti
