public inbox for linux-kernel@vger.kernel.org
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Leonardo Bras <leobras@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 2/4] swap: apply new local_schedule_work_on() interface
Date: Tue, 8 Aug 2023 16:39:29 -0300
Message-ID: <ZNKZ8Vc/LE9lVAjN@tpad>
In-Reply-To: <20230729083737.38699-4-leobras@redhat.com>

On Sat, Jul 29, 2023 at 05:37:33AM -0300, Leonardo Bras wrote:
> Make use of the new local_*lock_n*() and local_schedule_work_on()
> interfaces to improve performance & latency on PREEMPT_RT kernels.
> 
> For functions whose work may be scheduled on a different CPU, replace
> local_*lock*() with local_*lock_n*(), and replace schedule_work_on()
> with local_schedule_work_on(). Likewise, flush_work() is replaced by
> local_flush_work().
> 
> This should have no relevant performance impact on non-RT kernels:
> for functions that may be scheduled on a different CPU, the
> local_*lock's this_cpu_ptr() simply becomes
> per_cpu_ptr(smp_processor_id()).
> 
> Signed-off-by: Leonardo Bras <leobras@redhat.com>
> ---
>  mm/swap.c | 18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
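
For reference, a minimal sketch of the substitution the series
describes. The local_*lock_n*(), local_schedule_work_on() and
local_flush_work() names come from this RFC, but their exact
signatures are assumed here (the real definitions are in patch 1),
and the per-CPU structure is a simplified stand-in for what mm/swap.c
actually uses:

	#include <linux/local_lock.h>
	#include <linux/percpu.h>
	#include <linux/workqueue.h>

	/* illustrative per-CPU state guarded by a local_lock;
	 * ->work is assumed INIT_WORK()ed during boot */
	struct pcp_batch {
		local_lock_t lock;
		struct work_struct work;
	};
	static DEFINE_PER_CPU(struct pcp_batch, pcp_batch) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	/* before: queue work on the remote CPU and wait for it,
	 * interrupting whatever that CPU is running */
	static void drain_remote_old(int cpu)
	{
		struct work_struct *w = &per_cpu_ptr(&pcp_batch, cpu)->work;

		schedule_work_on(cpu, w);
		flush_work(w);
	}

	/* after: the RFC's CPU-indexed variants (signatures assumed);
	 * inside the work function, local_lock(&pcp_batch.lock)
	 * likewise becomes local_lock_n(&pcp_batch.lock, cpu), whose
	 * this_cpu_ptr() turns into per_cpu_ptr(..., cpu) */
	static void drain_remote_new(int cpu)
	{
		struct work_struct *w = &per_cpu_ptr(&pcp_batch, cpu)->work;

		local_schedule_work_on(cpu, w);
		local_flush_work(w);
	}

On !PREEMPT_RT the _n variants are expected to collapse back to the
plain local forms, which is why the series claims no relevant non-RT
impact.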

Leo,

I think the interruptions (queueing and flushing work on remote CPUs)
should rather be removed for both CONFIG_PREEMPT_RT and
!CONFIG_PREEMPT_RT.

The impact of grabbing locks must be properly analyzed, not rejected
out of hand.

Example:

commit 01b44456a7aa7c3b24fa9db7d1714b208b8ef3d8
Author: Mel Gorman <mgorman@techsingularity.net>
Date:   Fri Jun 24 13:54:23 2022 +0100

    mm/page_alloc: replace local_lock with normal spinlock
    
    struct per_cpu_pages is no longer strictly local as PCP lists can be
    drained remotely using a lock for protection.  While the use of local_lock
    works, it goes against the intent of local_lock which is for "pure CPU
    local concurrency control mechanisms and not suited for inter-CPU
    concurrency control" (Documentation/locking/locktypes.rst)
    
    local_lock protects against migration between when the percpu pointer is
    accessed and the pcp->lock acquired.  The lock acquisition is a preemption
    point so in the worst case, a task could migrate to another NUMA node and
    accidentally allocate remote memory.  The main requirement is to pin the
    task to a CPU that is suitable for PREEMPT_RT and !PREEMPT_RT.
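
Reduced to a sketch, the shape of that commit is: the per-CPU
structure embeds a plain spinlock_t, the local fast path pins the
task and takes its own CPU's lock (usually uncontended), and a remote
CPU can drain another CPU's lists by taking that lock directly, with
no work item or IPI interrupting it. Names here are simplified; the
upstream code in mm/page_alloc.c wraps this pattern in small helper
macros:

	#include <linux/percpu.h>
	#include <linux/preempt.h>
	#include <linux/spinlock.h>

	/* simplified stand-in for struct per_cpu_pages; each CPU's
	 * lock is assumed spin_lock_init()ed during boot */
	struct pcp {
		spinlock_t lock;	/* protects the page lists */
		/* ... per-CPU page lists ... */
	};
	static DEFINE_PER_CPU(struct pcp, pcp);

	/* local fast path: pin the task so the percpu pointer and the
	 * lock stay in agreement; migrate_disable() keeps this usable
	 * on PREEMPT_RT, where spin_lock() may sleep */
	static struct pcp *pcp_lock_local(void)
	{
		struct pcp *p;

		migrate_disable();
		p = this_cpu_ptr(&pcp);
		spin_lock(&p->lock);
		return p;
	}

	static void pcp_unlock_local(struct pcp *p)
	{
		spin_unlock(&p->lock);
		migrate_enable();
	}

	/* remote drain: no interruption of the remote CPU needed */
	static void pcp_drain_remote(int cpu)
	{
		struct pcp *p = per_cpu_ptr(&pcp, cpu);

		spin_lock(&p->lock);
		/* ... free cpu's batched pages ... */
		spin_unlock(&p->lock);
	}

That is what the suggestion above amounts to for mm/swap.c: take a
lock remotely instead of interrupting the CPU, on both RT and non-RT.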


Thread overview: 8+ messages
2023-07-29  8:37 [RFC PATCH 0/4] Improve RT performance & latency w/ local_lock_t() Leonardo Bras
2023-07-29  8:37 ` [RFC PATCH 1/4] Introducing local_lock_n() and local queue & flush Leonardo Bras
2023-07-29  8:37 ` [RFC PATCH 2/4] swap: apply new local_schedule_work_on() interface Leonardo Bras
2023-08-08 19:39   ` Marcelo Tosatti [this message]
2023-08-29  0:07     ` Leonardo Brás
2023-07-29  8:37 ` [RFC PATCH 3/4] memcontrol: " Leonardo Bras
2023-07-29  8:37 ` [RFC PATCH 4/4] slub: " Leonardo Bras
2023-08-01 19:05 ` [RFC PATCH 0/4] Improve RT performance & latency w/ local_lock_t() Leonardo Bras Soares Passos
