From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Andy Lutomirski <luto@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm <linux-mm@kvack.org>,
	Nicholas Piggin <npiggin@gmail.com>,
	Anton Blanchard <anton@ozlabs.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@ozlabs.org>,
	Randy Dunlap <rdunlap@infradead.org>,
	linux-arch <linux-arch@vger.kernel.org>, x86 <x86@kernel.org>,
	riel <riel@surriel.com>, Dave Hansen <dave.hansen@intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Nadav Amit <nadav.amit@gmail.com>
Subject: Re: [PATCH 09/23] membarrier: Fix incorrect barrier positions during exec and kthread_use_mm()
Date: Wed, 12 Jan 2022 12:08:27 -0500 (EST)
Message-ID: <883836409.24887.1642007307554.JavaMail.zimbra@efficios.com>
In-Reply-To: <762743530.24791.1642005040611.JavaMail.zimbra@efficios.com>


----- Mathieu Desnoyers <mathieu.desnoyers@efficios.com> wrote:
> ----- On Jan 8, 2022, at 11:43 AM, Andy Lutomirski luto@kernel.org wrote:
> 
> > membarrier() requires a barrier before changes to rq->curr->mm, not just
> > before writes to rq->membarrier_state.  Move the barrier in exec_mmap() to
> > the right place.
> 
> I don't see anything that was technically wrong with membarrier_exec_mmap()
> before this patchset. membarrier_exec_mmap() issued an smp_mb() just after
> the task_lock(), and proceeded to clear the mm->membarrier_state and
> runqueue membarrier state. tsk->mm was then set *after* that smp_mb().
> 
> So from this commit message we could be led to think there was something
> wrong before, but I do not think it's true. This first part of the proposed
> change is merely a performance optimization that removes a useless memory
> barrier on architectures where smp_mb__after_spinlock() is a no-op, and
> removes a useless store to mm->membarrier_state because it is already
> zero-initialized. This is all very nice, but does not belong in a "Fix" patch.
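
To spell out why I think the old code was already correct, the pre-patchset
ordering in exec_mmap() was roughly the following (a simplified sketch from
memory, not the exact code):

	task_lock(tsk);
	membarrier_exec_mmap(mm);	/* smp_mb(), then clear membarrier state */
	local_irq_disable();
	active_mm = tsk->active_mm;
	tsk->active_mm = mm;
	tsk->mm = mm;			/* ordered after the smp_mb() above */

so the store to tsk->mm, which membarrier() reads through rq->curr on the
remote CPU, was already ordered after a full barrier.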
> 
> > Add the barrier in kthread_use_mm() -- it was entirely
> > missing before.
> 
> This is correct. This second part of the patch is indeed a relevant fix.
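
For comparison, kthread_use_mm() before this patch looked roughly like this
(again a simplified sketch from memory):

	task_lock(tsk);
	/*
	 * No full barrier here on architectures where taking the
	 * task lock does not imply one.
	 */
	local_irq_disable();
	active_mm = tsk->active_mm;
	tsk->active_mm = mm;
	tsk->mm = mm;

so the ordering membarrier() relies on before the change to rq->curr->mm was
indeed missing there.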

However, this adds a useless barrier when CONFIG_MEMBARRIER=n.
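
One way to avoid that (just a sketch, the helper name below is made up) would
be to hide the barrier behind a membarrier hook that compiles away when
CONFIG_MEMBARRIER=n:

	#ifdef CONFIG_MEMBARRIER
	static inline void membarrier_mm_change_prepare(void)
	{
		/* Full barrier before changing rq->curr->mm. */
		smp_mb__after_spinlock();
	}
	#else
	static inline void membarrier_mm_change_prepare(void)
	{
	}
	#endif

and call that helper right after task_lock() in exec_mmap() and
kthread_use_mm().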

Thanks,

Mathieu


> 
> Thanks,
> 
> Mathieu
> 
> > 
> > This patch makes exec_mmap() and kthread_use_mm() use the same membarrier
> > hooks, which results in some code deletion.
> > 
> > As an added bonus, this will eliminate a redundant barrier in execve() on
> > arches for which spinlock acquisition is a barrier.
> > 
> > Signed-off-by: Andy Lutomirski <luto@kernel.org>
> > ---
> > fs/exec.c                 |  6 +++++-
> > include/linux/sched/mm.h  |  2 --
> > kernel/kthread.c          |  5 +++++
> > kernel/sched/membarrier.c | 15 ---------------
> > 4 files changed, 10 insertions(+), 18 deletions(-)
> > 
> > diff --git a/fs/exec.c b/fs/exec.c
> > index 38b05e01c5bd..325dab98bc51 100644
> > --- a/fs/exec.c
> > +++ b/fs/exec.c
> > @@ -1001,12 +1001,16 @@ static int exec_mmap(struct mm_struct *mm)
> > 	}
> > 
> > 	task_lock(tsk);
> > -	membarrier_exec_mmap(mm);
> > +	/*
> > +	 * membarrier() requires a full barrier before switching mm.
> > +	 */
> > +	smp_mb__after_spinlock();
> > 
> > 	local_irq_disable();
> > 	active_mm = tsk->active_mm;
> > 	tsk->active_mm = mm;
> > 	WRITE_ONCE(tsk->mm, mm);  /* membarrier reads this without locks */
> > +	membarrier_update_current_mm(mm);
> > 	/*
> > 	 * This prevents preemption while active_mm is being loaded and
> > 	 * it and mm are being updated, which could cause problems for
> > diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> > index e107f292fc42..f1d2beac464c 100644
> > --- a/include/linux/sched/mm.h
> > +++ b/include/linux/sched/mm.h
> > @@ -344,8 +344,6 @@ enum {
> > #include <asm/membarrier.h>
> > #endif
> > 
> > -extern void membarrier_exec_mmap(struct mm_struct *mm);
> > -
> > extern void membarrier_update_current_mm(struct mm_struct *next_mm);
> > 
> > /*
> > diff --git a/kernel/kthread.c b/kernel/kthread.c
> > index 3b18329f885c..18b0a2e0e3b2 100644
> > --- a/kernel/kthread.c
> > +++ b/kernel/kthread.c
> > @@ -1351,6 +1351,11 @@ void kthread_use_mm(struct mm_struct *mm)
> > 	WARN_ON_ONCE(tsk->mm);
> > 
> > 	task_lock(tsk);
> > +	/*
> > +	 * membarrier() requires a full barrier before switching mm.
> > +	 */
> > +	smp_mb__after_spinlock();
> > +
> > 	/* Hold off tlb flush IPIs while switching mm's */
> > 	local_irq_disable();
> > 	active_mm = tsk->active_mm;
> > diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
> > index c38014c2ed66..44fafa6e1efd 100644
> > --- a/kernel/sched/membarrier.c
> > +++ b/kernel/sched/membarrier.c
> > @@ -277,21 +277,6 @@ static void ipi_sync_rq_state(void *info)
> > 	smp_mb();
> > }
> > 
> > -void membarrier_exec_mmap(struct mm_struct *mm)
> > -{
> > -	/*
> > -	 * Issue a memory barrier before clearing membarrier_state to
> > -	 * guarantee that no memory access prior to exec is reordered after
> > -	 * clearing this state.
> > -	 */
> > -	smp_mb();
> > -	/*
> > -	 * Keep the runqueue membarrier_state in sync with this mm
> > -	 * membarrier_state.
> > -	 */
> > -	this_cpu_write(runqueues.membarrier_state, 0);
> > -}
> > -
> > void membarrier_update_current_mm(struct mm_struct *next_mm)
> > {
> > 	struct rq *rq = this_rq();
> > --
> > 2.33.1
> 
> -- 
> Mathieu Desnoyers
> EfficiOS Inc.
> http://www.efficios.com

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

Thread overview: 71+ messages
2022-01-08 16:43 [PATCH 00/23] mm, sched: Rework lazy mm handling Andy Lutomirski
2022-01-08 16:43 ` [PATCH 01/23] membarrier: Document why membarrier() works Andy Lutomirski
2022-01-12 15:30   ` Mathieu Desnoyers
2022-01-08 16:43 ` [PATCH 02/23] x86/mm: Handle unlazying membarrier core sync in the arch code Andy Lutomirski
2022-01-12 15:40   ` Mathieu Desnoyers
2022-01-08 16:43 ` [PATCH 03/23] membarrier: Remove membarrier_arch_switch_mm() prototype in core code Andy Lutomirski
2022-01-08 16:43 ` [PATCH 04/23] membarrier: Make the post-switch-mm barrier explicit Andy Lutomirski
2022-01-12 15:52   ` Mathieu Desnoyers
2022-01-08 16:43 ` [PATCH 05/23] membarrier, kthread: Use _ONCE accessors for task->mm Andy Lutomirski
2022-01-12 15:55   ` Mathieu Desnoyers
2022-01-08 16:43 ` [PATCH 06/23] powerpc/membarrier: Remove special barrier on mm switch Andy Lutomirski
2022-01-10  8:42   ` Christophe Leroy
2022-01-12 15:57   ` Mathieu Desnoyers
2022-01-08 16:43 ` [PATCH 07/23] membarrier: Rewrite sync_core_before_usermode() and improve documentation Andy Lutomirski
2022-01-12 16:11   ` Mathieu Desnoyers
2022-01-08 16:43 ` [PATCH 08/23] membarrier: Remove redundant clear of mm->membarrier_state in exec_mmap() Andy Lutomirski
2022-01-12 16:13   ` Mathieu Desnoyers
2022-01-08 16:43 ` [PATCH 09/23] membarrier: Fix incorrect barrier positions during exec and kthread_use_mm() Andy Lutomirski
2022-01-12 16:30   ` Mathieu Desnoyers
2022-01-12 17:08     ` Mathieu Desnoyers [this message]
2022-01-08 16:43 ` [PATCH 10/23] x86/events, x86/insn-eval: Remove incorrect active_mm references Andy Lutomirski
2022-01-08 16:43 ` [PATCH 11/23] sched/scs: Initialize shadow stack on idle thread bringup, not shutdown Andy Lutomirski
2022-01-10 22:06   ` Sami Tolvanen
2022-01-08 16:43 ` [PATCH 12/23] Rework "sched/core: Fix illegal RCU from offline CPUs" Andy Lutomirski
2022-01-08 16:43 ` [PATCH 13/23] exec: Remove unnecessary vmacache_seqnum clear in exec_mmap() Andy Lutomirski
2022-01-08 16:43 ` [PATCH 14/23] sched, exec: Factor current mm changes out from exec Andy Lutomirski
2022-01-08 16:44 ` [PATCH 15/23] kthread: Switch to __change_current_mm() Andy Lutomirski
2022-01-08 16:44 ` [PATCH 16/23] sched: Use lightweight hazard pointers to grab lazy mms Andy Lutomirski
2022-01-08 19:22   ` Linus Torvalds
2022-01-08 22:04     ` Andy Lutomirski
2022-01-09  0:53       ` Linus Torvalds
2022-01-09  3:58         ` Andy Lutomirski
2022-01-09  4:38           ` Linus Torvalds
2022-01-09 20:19             ` Andy Lutomirski
2022-01-09 20:48               ` Linus Torvalds
2022-01-09 21:51                 ` Linus Torvalds
2022-01-10  0:52                   ` Andy Lutomirski
2022-01-10  2:36                     ` Rik van Riel
2022-01-10  3:51                       ` Linus Torvalds
2022-01-10  4:56                   ` Nicholas Piggin
2022-01-10  5:17                     ` Nicholas Piggin
2022-01-10 17:19                       ` Linus Torvalds
2022-01-11  2:24                         ` Nicholas Piggin
2022-01-10 20:52                     ` Andy Lutomirski
2022-01-11  3:10                       ` Nicholas Piggin
2022-01-11 15:39                         ` Andy Lutomirski
2022-01-11 22:48                           ` Nicholas Piggin
2022-01-12  0:42                             ` Nicholas Piggin
2022-01-11 10:39                 ` Will Deacon
2022-01-11 15:22                   ` Andy Lutomirski
2022-01-09  5:56   ` Nadav Amit
2022-01-09  6:48     ` Linus Torvalds
2022-01-09  8:49       ` Nadav Amit
2022-01-09 19:10         ` Linus Torvalds
2022-01-09 19:52           ` Andy Lutomirski
2022-01-09 20:00             ` Linus Torvalds
2022-01-09 20:34             ` Nadav Amit
2022-01-09 20:48               ` Andy Lutomirski
2022-01-09 19:22         ` Rik van Riel
2022-01-09 19:34           ` Nadav Amit
2022-01-09 19:37             ` Rik van Riel
2022-01-09 19:51               ` Nadav Amit
2022-01-09 19:54                 ` Linus Torvalds
2022-01-08 16:44 ` [PATCH 17/23] x86/mm: Make use/unuse_temporary_mm() non-static Andy Lutomirski
2022-01-08 16:44 ` [PATCH 18/23] x86/mm: Allow temporary mms when IRQs are on Andy Lutomirski
2022-01-08 16:44 ` [PATCH 19/23] x86/efi: Make efi_enter/leave_mm use the temporary_mm machinery Andy Lutomirski
2022-01-10 13:13   ` Ard Biesheuvel
2022-01-08 16:44 ` [PATCH 20/23] x86/mm: Remove leave_mm() in favor of unlazy_mm_irqs_off() Andy Lutomirski
2022-01-08 16:44 ` [PATCH 21/23] x86/mm: Use unlazy_mm_irqs_off() in TLB flush IPIs Andy Lutomirski
2022-01-08 16:44 ` [PATCH 22/23] x86/mm: Optimize for_each_possible_lazymm_cpu() Andy Lutomirski
2022-01-08 16:44 ` [PATCH 23/23] x86/mm: Opt in to IRQs-off activate_mm() Andy Lutomirski
