public inbox for linux-arch@vger.kernel.org
  • * [RFC PATCH for 4.15 06/10] Fix: x86: Add missing core serializing instruction on migration
           [not found] <20171110213717.12457-1-mathieu.desnoyers@efficios.com>
           [not found] ` <20171110213717.12457-1-mathieu.desnoyers-vg+e7yoeK/dWk0Htik3J/w@public.gmane.org>
    @ 2017-11-10 21:37 ` Mathieu Desnoyers
      2017-11-10 21:37   ` Mathieu Desnoyers
      2017-11-10 21:37 ` [RFC PATCH v2 for 4.15 07/10] membarrier: x86: Provide core serializing command Mathieu Desnoyers
                       ` (2 subsequent siblings)
      4 siblings, 1 reply; 28+ messages in thread
    From: Mathieu Desnoyers @ 2017-11-10 21:37 UTC (permalink / raw)
      To: Boqun Feng, Peter Zijlstra, Paul E . McKenney
      Cc: linux-kernel, linux-api, Andy Lutomirski, Andrew Hunter,
    	Maged Michael, Avi Kivity, Benjamin Herrenschmidt, Paul Mackerras,
    	Michael Ellerman, Dave Watson, Thomas Gleixner, Ingo Molnar,
    	H . Peter Anvin, Andrea Parri, Russell King, Greg Hackmann,
    	Will Deacon, David Sehr, Linus Torvalds, x86, Mathieu Desnoyers
    
    x86 is missing a core serializing instruction in migration scenarios.
    
    Given that x86-32 can return to user-space through sysexit, and x86-64
    through sysretq and sysretl, none of which are core serializing, the
    following user-space self-modifying code (JIT) scenario can occur:
    
         CPU 0                      CPU 1
    
    User-space self-modify code
    Preempted
    migrated              ->
                                    scheduler selects task
                                    Return to user-space (iret or sysexit)
                                    User-space issues sync_core()
                          <-        migrated
    scheduler selects task
    Return to user-space (sysexit)
    jump to modified code
    Run modified code without sync_core() -> bug.
    
    This migration pattern can return to user-space through sysexit,
    sysretl, or sysretq, none of which are core serializing, and it
    therefore breaks the sequential consistency expectations of a
    single-threaded process.
    
    Fix this issue by invoking sync_core_before_usermode() the first
    time a runqueue finishes a task switch after receiving a migrated
    thread.
    
    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    CC: Peter Zijlstra <peterz@infradead.org>
    CC: Andy Lutomirski <luto@kernel.org>
    CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    CC: Boqun Feng <boqun.feng@gmail.com>
    CC: Andrew Hunter <ahh@google.com>
    CC: Maged Michael <maged.michael@gmail.com>
    CC: Avi Kivity <avi@scylladb.com>
    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    CC: Paul Mackerras <paulus@samba.org>
    CC: Michael Ellerman <mpe@ellerman.id.au>
    CC: Dave Watson <davejwatson@fb.com>
    CC: Thomas Gleixner <tglx@linutronix.de>
    CC: Ingo Molnar <mingo@redhat.com>
    CC: "H. Peter Anvin" <hpa@zytor.com>
    CC: Andrea Parri <parri.andrea@gmail.com>
    CC: Russell King <linux@armlinux.org.uk>
    CC: Greg Hackmann <ghackmann@google.com>
    CC: Will Deacon <will.deacon@arm.com>
    CC: David Sehr <sehr@google.com>
    CC: Linus Torvalds <torvalds@linux-foundation.org>
    CC: x86@kernel.org
    CC: linux-arch@vger.kernel.org
    ---
     kernel/sched/core.c  | 7 +++++++
     kernel/sched/sched.h | 1 +
     2 files changed, 8 insertions(+)
    
    diff --git a/kernel/sched/core.c b/kernel/sched/core.c
    index c79e94278613..4a1c9782267a 100644
    --- a/kernel/sched/core.c
    +++ b/kernel/sched/core.c
    @@ -927,6 +927,7 @@ static struct rq *move_queued_task(struct rq *rq, struct rq_flags *rf,
     
     	rq_lock(rq, rf);
     	BUG_ON(task_cpu(p) != new_cpu);
    +	rq->need_sync_core = 1;
     	enqueue_task(rq, p, 0);
     	p->on_rq = TASK_ON_RQ_QUEUED;
     	check_preempt_curr(rq, p, 0);
    @@ -2684,6 +2685,12 @@ static struct rq *finish_task_switch(struct task_struct *prev)
     	prev_state = prev->state;
     	vtime_task_switch(prev);
     	perf_event_task_sched_in(prev, current);
    +#ifdef CONFIG_SMP
    +	if (unlikely(rq->need_sync_core)) {
    +		sync_core_before_usermode();
    +		rq->need_sync_core = 0;
    +	}
    +#endif
     	finish_lock_switch(rq, prev);
     	finish_arch_post_lock_switch();
     
    diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
    index cab256c1720a..33e617bc491c 100644
    --- a/kernel/sched/sched.h
    +++ b/kernel/sched/sched.h
    @@ -734,6 +734,7 @@ struct rq {
     	/* For active balancing */
     	int active_balance;
     	int push_cpu;
    +	int need_sync_core;
     	struct cpu_stop_work active_balance_work;
     	/* cpu of this runqueue: */
     	int cpu;
    -- 
    2.11.0
    
  • * [RFC PATCH v2 for 4.15 07/10] membarrier: x86: Provide core serializing command
           [not found] <20171110213717.12457-1-mathieu.desnoyers@efficios.com>
           [not found] ` <20171110213717.12457-1-mathieu.desnoyers-vg+e7yoeK/dWk0Htik3J/w@public.gmane.org>
      2017-11-10 21:37 ` [RFC PATCH for 4.15 06/10] Fix: x86: Add missing core serializing instruction on migration Mathieu Desnoyers
    @ 2017-11-10 21:37 ` Mathieu Desnoyers
      2017-11-10 21:37   ` Mathieu Desnoyers
      2017-11-10 21:37 ` [RFC PATCH for 4.15 08/10] membarrier: selftest: Test private expedited sync core cmd Mathieu Desnoyers
      2017-11-10 21:37 ` [RFC PATCH for 4.15 10/10] membarrier: selftest: Test shared expedited cmd Mathieu Desnoyers
      4 siblings, 1 reply; 28+ messages in thread
    From: Mathieu Desnoyers @ 2017-11-10 21:37 UTC (permalink / raw)
      To: Boqun Feng, Peter Zijlstra, Paul E . McKenney
      Cc: linux-kernel, linux-api, Andy Lutomirski, Andrew Hunter,
    	Maged Michael, Avi Kivity, Benjamin Herrenschmidt, Paul Mackerras,
    	Michael Ellerman, Dave Watson, Thomas Gleixner, Ingo Molnar,
    	H . Peter Anvin, Andrea Parri, Russell King, Greg Hackmann,
    	Will Deacon, David Sehr, Linus Torvalds, x86, Mathieu Desnoyers
    
    There are two places where core serialization is needed by membarrier:
    
    1) When returning from the membarrier IPI,
    2) After the scheduler updates curr to a thread with a different mm,
       before going back to user-space, since curr->mm is used by
       membarrier to check whether it needs to send an IPI to that CPU.
    
    x86-32 uses iret to return from interrupts, and both iret and sysexit
    to go back to user-space. The iret instruction is core serializing,
    but sysexit is not.
    
    x86-64 uses iret to return from interrupts, which takes care of the
    IPI. However, it can return to user-space through sysretl (compat
    code), sysretq, or iret. Given that sysret{l,q} are not core
    serializing, we rely instead on the write_cr3() performed by
    switch_mm() to provide core serialization after changing the current
    mm, and we handle the special case of kthread -> uthread (which
    temporarily keeps the current mm in active_mm) by adding a
    sync_core_before_usermode() call in that specific case.
    
    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    CC: Peter Zijlstra <peterz@infradead.org>
    CC: Andy Lutomirski <luto@kernel.org>
    CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    CC: Boqun Feng <boqun.feng@gmail.com>
    CC: Andrew Hunter <ahh@google.com>
    CC: Maged Michael <maged.michael@gmail.com>
    CC: Avi Kivity <avi@scylladb.com>
    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    CC: Paul Mackerras <paulus@samba.org>
    CC: Michael Ellerman <mpe@ellerman.id.au>
    CC: Dave Watson <davejwatson@fb.com>
    CC: Thomas Gleixner <tglx@linutronix.de>
    CC: Ingo Molnar <mingo@redhat.com>
    CC: "H. Peter Anvin" <hpa@zytor.com>
    CC: Andrea Parri <parri.andrea@gmail.com>
    CC: Russell King <linux@armlinux.org.uk>
    CC: Greg Hackmann <ghackmann@google.com>
    CC: Will Deacon <will.deacon@arm.com>
    CC: David Sehr <sehr@google.com>
    CC: x86@kernel.org
    CC: linux-arch@vger.kernel.org
    
    ---
    Changes since v1:
    - Use the newly introduced sync_core_before_usermode(). Move all state
      handling to generic code.
    ---
     arch/x86/Kconfig          |  1 +
     arch/x86/entry/entry_32.S |  5 +++++
     arch/x86/entry/entry_64.S |  8 ++++++++
     arch/x86/mm/tlb.c         |  7 ++++---
     include/linux/sched/mm.h  | 12 ++++++++++++
     kernel/sched/core.c       |  6 +++++-
     kernel/sched/membarrier.c |  4 ++++
     7 files changed, 39 insertions(+), 4 deletions(-)
    
    diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
    index 54fbb8960d94..94bdf5fc7d94 100644
    --- a/arch/x86/Kconfig
    +++ b/arch/x86/Kconfig
    @@ -54,6 +54,7 @@ config X86
     	select ARCH_HAS_FORTIFY_SOURCE
     	select ARCH_HAS_GCOV_PROFILE_ALL
     	select ARCH_HAS_KCOV			if X86_64
    +	select ARCH_HAS_MEMBARRIER_SYNC_CORE
     	select ARCH_HAS_PMEM_API		if X86_64
     	# Causing hangs/crashes, see the commit that added this change for details.
     	select ARCH_HAS_REFCOUNT		if BROKEN
    diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
    index 4838037f97f6..04e5daba8456 100644
    --- a/arch/x86/entry/entry_32.S
    +++ b/arch/x86/entry/entry_32.S
    @@ -553,6 +553,11 @@ restore_all:
     .Lrestore_nocheck:
     	RESTORE_REGS 4				# skip orig_eax/error_code
     .Lirq_return:
    +	/*
    +	 * ARCH_HAS_MEMBARRIER_SYNC_CORE relies on iret being core
    +	 * serializing when returning from the IPI handler and when
    +	 * returning from the scheduler to user-space.
    +	 */
     	INTERRUPT_RETURN
     
     .section .fixup, "ax"
    diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
    index bcfc5668dcb2..4859f04e1695 100644
    --- a/arch/x86/entry/entry_64.S
    +++ b/arch/x86/entry/entry_64.S
    @@ -642,6 +642,10 @@ GLOBAL(restore_regs_and_iret)
     restore_c_regs_and_iret:
     	RESTORE_C_REGS
     	REMOVE_PT_GPREGS_FROM_STACK 8
    +	/*
    +	 * ARCH_HAS_MEMBARRIER_SYNC_CORE relies on iret being core
    +	 * serializing when returning from the IPI handler.
    +	 */
     	INTERRUPT_RETURN
     
     ENTRY(native_iret)
    @@ -1122,6 +1126,10 @@ paranoid_exit_restore:
     	RESTORE_EXTRA_REGS
     	RESTORE_C_REGS
     	REMOVE_PT_GPREGS_FROM_STACK 8
    +	/*
    +	 * ARCH_HAS_MEMBARRIER_SYNC_CORE relies on iret being core
    +	 * serializing when returning from the IPI handler.
    +	 */
     	INTERRUPT_RETURN
     END(paranoid_exit)
     
    diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
    index 5abf9bfcca1f..3b13d6735fa5 100644
    --- a/arch/x86/mm/tlb.c
    +++ b/arch/x86/mm/tlb.c
    @@ -147,9 +147,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
     	this_cpu_write(cpu_tlbstate.is_lazy, false);
     
     	/*
    -	 * The membarrier system call requires a full memory barrier
    -	 * before returning to user-space, after storing to rq->curr.
    -	 * Writing to CR3 provides that full memory barrier.
    +	 * The membarrier system call requires a full memory barrier and
    +	 * core serialization before returning to user-space, after
    +	 * storing to rq->curr. Writing to CR3 provides that full
    +	 * memory barrier and core serializing instruction.
     	 */
     	if (real_prev == next) {
     		VM_WARN_ON(this_cpu_read(cpu_tlbstate.ctxs[prev_asid].ctx_id) !=
    diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
    index 49a5b37a215a..6d7399a9185c 100644
    --- a/include/linux/sched/mm.h
    +++ b/include/linux/sched/mm.h
    @@ -222,6 +222,7 @@ enum {
     	MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY		= (1U << 0),
     	MEMBARRIER_STATE_PRIVATE_EXPEDITED			= (1U << 1),
     	MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY	= (1U << 2),
    +	MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE		= (1U << 3),
     };
     
     enum {
    @@ -232,6 +233,14 @@ enum {
     #include <asm/membarrier.h>
     #endif
     
    +static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
    +{
    +	if (likely(!(atomic_read(&mm->membarrier_state) &
    +			MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
    +		return;
    +	sync_core_before_usermode();
    +}
    +
     static inline void membarrier_execve(struct task_struct *t)
     {
     	atomic_set(&t->mm->membarrier_state, 0);
    @@ -246,6 +255,9 @@ static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
     static inline void membarrier_execve(struct task_struct *t)
     {
     }
    +static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
    +{
    +}
     #endif
     
     #endif /* _LINUX_SCHED_MM_H */
    diff --git a/kernel/sched/core.c b/kernel/sched/core.c
    index 4a1c9782267a..c3b8248c684d 100644
    --- a/kernel/sched/core.c
    +++ b/kernel/sched/core.c
    @@ -2700,9 +2700,13 @@ static struct rq *finish_task_switch(struct task_struct *prev)
     	 * thread, mmdrop()'s implicit full barrier is required by the
     	 * membarrier system call, because the current active_mm can
     	 * become the current mm without going through switch_mm().
    +	 * membarrier also requires a core serializing instruction
    +	 * before going back to user-space after storing to rq->curr.
     	 */
    -	if (mm)
    +	if (mm) {
     		mmdrop(mm);
    +		membarrier_mm_sync_core_before_usermode(mm);
    +	}
     	if (unlikely(prev_state == TASK_DEAD)) {
     		if (prev->sched_class->task_dead)
     			prev->sched_class->task_dead(prev);
    diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
    index 82efa7c64902..c240158138ee 100644
    --- a/kernel/sched/membarrier.c
    +++ b/kernel/sched/membarrier.c
    @@ -141,6 +141,10 @@ static int membarrier_register_private_expedited(int flags)
     		return 0;
     	atomic_or(MEMBARRIER_STATE_PRIVATE_EXPEDITED,
     			&mm->membarrier_state);
    +	if (flags & MEMBARRIER_FLAG_SYNC_CORE) {
    +		atomic_or(MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE,
    +				&mm->membarrier_state);
    +	}
     	if (!(atomic_read(&mm->mm_users) == 1 && get_nr_threads(p) == 1)) {
     		/*
     		 * Ensure all future scheduler executions will observe the
    -- 
    2.11.0
    
  • * [RFC PATCH for 4.15 08/10] membarrier: selftest: Test private expedited sync core cmd
           [not found] <20171110213717.12457-1-mathieu.desnoyers@efficios.com>
                       ` (2 preceding siblings ...)
      2017-11-10 21:37 ` [RFC PATCH v2 for 4.15 07/10] membarrier: x86: Provide core serializing command Mathieu Desnoyers
    @ 2017-11-10 21:37 ` Mathieu Desnoyers
      2017-11-10 21:37   ` Mathieu Desnoyers
      2017-11-10 21:37 ` [RFC PATCH for 4.15 10/10] membarrier: selftest: Test shared expedited cmd Mathieu Desnoyers
      4 siblings, 1 reply; 28+ messages in thread
    From: Mathieu Desnoyers @ 2017-11-10 21:37 UTC (permalink / raw)
      To: Boqun Feng, Peter Zijlstra, Paul E . McKenney
      Cc: linux-kernel, linux-api, Andy Lutomirski, Andrew Hunter,
    	Maged Michael, Avi Kivity, Benjamin Herrenschmidt, Paul Mackerras,
    	Michael Ellerman, Dave Watson, Thomas Gleixner, Ingo Molnar,
    	H . Peter Anvin, Andrea Parri, Russell King, Greg Hackmann,
    	Will Deacon, David Sehr, Linus Torvalds, x86, Mathieu Desnoyers
    
    Test the new MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE and
    MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE commands.
    
    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    CC: Shuah Khan <shuahkh@osg.samsung.com>
    CC: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    CC: Peter Zijlstra <peterz@infradead.org>
    CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    CC: Boqun Feng <boqun.feng@gmail.com>
    CC: Andrew Hunter <ahh@google.com>
    CC: Maged Michael <maged.michael@gmail.com>
    CC: Avi Kivity <avi@scylladb.com>
    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    CC: Paul Mackerras <paulus@samba.org>
    CC: Michael Ellerman <mpe@ellerman.id.au>
    CC: Dave Watson <davejwatson@fb.com>
    CC: Alan Stern <stern@rowland.harvard.edu>
    CC: Will Deacon <will.deacon@arm.com>
    CC: Andy Lutomirski <luto@kernel.org>
    CC: Alice Ferrazzi <alice.ferrazzi@gmail.com>
    CC: Paul Elder <paul.elder@pitt.edu>
    CC: linux-kselftest@vger.kernel.org
    CC: linux-arch@vger.kernel.org
    ---
     .../testing/selftests/membarrier/membarrier_test.c | 77 +++++++++++++++++++++-
     1 file changed, 76 insertions(+), 1 deletion(-)
    
    diff --git a/tools/testing/selftests/membarrier/membarrier_test.c b/tools/testing/selftests/membarrier/membarrier_test.c
    index d7543a6d9030..a0eae8d51e72 100644
    --- a/tools/testing/selftests/membarrier/membarrier_test.c
    +++ b/tools/testing/selftests/membarrier/membarrier_test.c
    @@ -132,6 +132,63 @@ static int test_membarrier_private_expedited_success(void)
     	return 0;
     }
     
    +static int test_membarrier_private_expedited_sync_core_fail(void)
    +{
    +	int cmd = MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, flags = 0;
    +	const char *test_name = "sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE not registered failure";
    +
    +	if (sys_membarrier(cmd, flags) != -1) {
    +		ksft_exit_fail_msg(
    +			"%s test: flags = %d. Should fail, but passed\n",
    +			test_name, flags);
    +	}
    +	if (errno != EPERM) {
    +		ksft_exit_fail_msg(
    +			"%s test: flags = %d. Should return (%d: \"%s\"), but returned (%d: \"%s\").\n",
    +			test_name, flags, EPERM, strerror(EPERM),
    +			errno, strerror(errno));
    +	}
    +
    +	ksft_test_result_pass(
    +		"%s test: flags = %d, errno = %d\n",
    +		test_name, flags, errno);
    +	return 0;
    +}
    +
    +static int test_membarrier_register_private_expedited_sync_core_success(void)
    +{
    +	int cmd = MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE, flags = 0;
    +	const char *test_name = "sys membarrier MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE";
    +
    +	if (sys_membarrier(cmd, flags) != 0) {
    +		ksft_exit_fail_msg(
    +			"%s test: flags = %d, errno = %d\n",
    +			test_name, flags, errno);
    +	}
    +
    +	ksft_test_result_pass(
    +		"%s test: flags = %d\n",
    +		test_name, flags);
    +	return 0;
    +}
    +
    +static int test_membarrier_private_expedited_sync_core_success(void)
    +{
    +	int cmd = MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE, flags = 0;
    +	const char *test_name = "sys membarrier MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE";
    +
    +	if (sys_membarrier(cmd, flags) != 0) {
    +		ksft_exit_fail_msg(
    +			"%s test: flags = %d, errno = %d\n",
    +			test_name, flags, errno);
    +	}
    +
    +	ksft_test_result_pass(
    +		"%s test: flags = %d\n",
    +		test_name, flags);
    +	return 0;
    +}
    +
     static int test_membarrier(void)
     {
     	int status;
    @@ -154,6 +211,22 @@ static int test_membarrier(void)
     	status = test_membarrier_private_expedited_success();
     	if (status)
     		return status;
    +	status = sys_membarrier(MEMBARRIER_CMD_QUERY, 0);
    +	if (status < 0) {
    +		ksft_test_result_fail("sys_membarrier() failed\n");
    +		return status;
    +	}
    +	if (status & MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE) {
    +		status = test_membarrier_private_expedited_sync_core_fail();
    +		if (status)
    +			return status;
    +		status = test_membarrier_register_private_expedited_sync_core_success();
    +		if (status)
    +			return status;
    +		status = test_membarrier_private_expedited_sync_core_success();
    +		if (status)
    +			return status;
    +	}
     	return 0;
     }
     
    @@ -173,8 +246,10 @@ static int test_membarrier_query(void)
     		}
     		ksft_exit_fail_msg("sys_membarrier() failed\n");
     	}
    -	if (!(ret & MEMBARRIER_CMD_SHARED))
    +	if (!(ret & MEMBARRIER_CMD_SHARED)) {
    +		ksft_test_result_fail("sys_membarrier() CMD_SHARED query failed\n");
     		ksft_exit_fail_msg("sys_membarrier is not supported.\n");
    +	}
     
     	ksft_test_result_pass("sys_membarrier available\n");
     	return 0;
    -- 
    2.11.0
    
  • * [RFC PATCH for 4.15 10/10] membarrier: selftest: Test shared expedited cmd
           [not found] <20171110213717.12457-1-mathieu.desnoyers@efficios.com>
                       ` (3 preceding siblings ...)
      2017-11-10 21:37 ` [RFC PATCH for 4.15 08/10] membarrier: selftest: Test private expedited sync core cmd Mathieu Desnoyers
    @ 2017-11-10 21:37 ` Mathieu Desnoyers
      2017-11-10 21:37   ` Mathieu Desnoyers
      4 siblings, 1 reply; 28+ messages in thread
    From: Mathieu Desnoyers @ 2017-11-10 21:37 UTC (permalink / raw)
      To: Boqun Feng, Peter Zijlstra, Paul E . McKenney
      Cc: linux-kernel, linux-api, Andy Lutomirski, Andrew Hunter,
    	Maged Michael, Avi Kivity, Benjamin Herrenschmidt, Paul Mackerras,
    	Michael Ellerman, Dave Watson, Thomas Gleixner, Ingo Molnar,
    	H . Peter Anvin, Andrea Parri, Russell King, Greg Hackmann,
    	Will Deacon, David Sehr, Linus Torvalds, x86, Mathieu Desnoyers
    
    Test the new MEMBARRIER_CMD_SHARED_EXPEDITED and
    MEMBARRIER_CMD_REGISTER_SHARED_EXPEDITED commands.
    
    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    CC: Shuah Khan <shuahkh@osg.samsung.com>
    CC: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    CC: Peter Zijlstra <peterz@infradead.org>
    CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    CC: Boqun Feng <boqun.feng@gmail.com>
    CC: Andrew Hunter <ahh@google.com>
    CC: Maged Michael <maged.michael@gmail.com>
    CC: Avi Kivity <avi@scylladb.com>
    CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    CC: Paul Mackerras <paulus@samba.org>
    CC: Michael Ellerman <mpe@ellerman.id.au>
    CC: Dave Watson <davejwatson@fb.com>
    CC: Alan Stern <stern@rowland.harvard.edu>
    CC: Will Deacon <will.deacon@arm.com>
    CC: Andy Lutomirski <luto@kernel.org>
    CC: Alice Ferrazzi <alice.ferrazzi@gmail.com>
    CC: Paul Elder <paul.elder@pitt.edu>
    CC: linux-kselftest@vger.kernel.org
    CC: linux-arch@vger.kernel.org
    ---
     .../testing/selftests/membarrier/membarrier_test.c | 47 ++++++++++++++++++++++
     1 file changed, 47 insertions(+)
    
    diff --git a/tools/testing/selftests/membarrier/membarrier_test.c b/tools/testing/selftests/membarrier/membarrier_test.c
    index a0eae8d51e72..c699227f4d9a 100644
    --- a/tools/testing/selftests/membarrier/membarrier_test.c
    +++ b/tools/testing/selftests/membarrier/membarrier_test.c
    @@ -189,6 +189,40 @@ static int test_membarrier_private_expedited_sync_core_success(void)
     	return 0;
     }
     
    +static int test_membarrier_register_shared_expedited_success(void)
    +{
    +	int cmd = MEMBARRIER_CMD_REGISTER_SHARED_EXPEDITED, flags = 0;
    +	const char *test_name = "sys membarrier MEMBARRIER_CMD_REGISTER_SHARED_EXPEDITED";
    +
    +	if (sys_membarrier(cmd, flags) != 0) {
    +		ksft_exit_fail_msg(
    +			"%s test: flags = %d, errno = %d\n",
    +			test_name, flags, errno);
    +	}
    +
    +	ksft_test_result_pass(
    +		"%s test: flags = %d\n",
    +		test_name, flags);
    +	return 0;
    +}
    +
    +static int test_membarrier_shared_expedited_success(void)
    +{
    +	int cmd = MEMBARRIER_CMD_SHARED_EXPEDITED, flags = 0;
    +	const char *test_name = "sys membarrier MEMBARRIER_CMD_SHARED_EXPEDITED";
    +
    +	if (sys_membarrier(cmd, flags) != 0) {
    +		ksft_exit_fail_msg(
    +			"%s test: flags = %d, errno = %d\n",
    +			test_name, flags, errno);
    +	}
    +
    +	ksft_test_result_pass(
    +		"%s test: flags = %d\n",
    +		test_name, flags);
    +	return 0;
    +}
    +
     static int test_membarrier(void)
     {
     	int status;
    @@ -227,6 +261,19 @@ static int test_membarrier(void)
     		if (status)
     			return status;
     	}
    +	/*
    +	 * It is valid to send a shared membarrier from a non-registered
    +	 * process.
    +	 */
    +	status = test_membarrier_shared_expedited_success();
    +	if (status)
    +		return status;
    +	status = test_membarrier_register_shared_expedited_success();
    +	if (status)
    +		return status;
    +	status = test_membarrier_shared_expedited_success();
    +	if (status)
    +		return status;
     	return 0;
     }
     
    -- 
    2.11.0
    

  • end of thread, other threads:[~2017-11-10 23:13 UTC | newest]
    
    Thread overview: 28+ messages
         [not found] <20171110213717.12457-1-mathieu.desnoyers@efficios.com>
         [not found] ` <20171110213717.12457-1-mathieu.desnoyers-vg+e7yoeK/dWk0Htik3J/w@public.gmane.org>
    2017-11-10 21:37   ` [RFC PATCH for 4.15 01/10] membarrier: selftest: Test private expedited cmd Mathieu Desnoyers
    2017-11-10 21:37     ` Mathieu Desnoyers
    2017-11-10 21:37   ` [RFC PATCH v7 for 4.15 02/10] membarrier: powerpc: Skip memory barrier in switch_mm() Mathieu Desnoyers
    2017-11-10 21:37     ` Mathieu Desnoyers
    2017-11-10 21:37   ` [RFC PATCH for 4.15 04/10] membarrier: Provide core serializing command Mathieu Desnoyers
    2017-11-10 21:37     ` Mathieu Desnoyers
    2017-11-10 21:37   ` [RFC PATCH for 4.15 05/10] x86: Introduce sync_core_before_usermode Mathieu Desnoyers
    2017-11-10 21:37     ` Mathieu Desnoyers
    2017-11-10 22:02     ` Andy Lutomirski
    2017-11-10 22:02       ` Andy Lutomirski
         [not found]       ` <CALCETrWV+bgUPoS7NqVYhoi7hOyvsfoWw5CnyMrkYz=HYznmXQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
    2017-11-10 22:20         ` Mathieu Desnoyers
    2017-11-10 22:20           ` Mathieu Desnoyers
    2017-11-10 22:32           ` Mathieu Desnoyers
    2017-11-10 22:32             ` Mathieu Desnoyers
    2017-11-10 23:13             ` Mathieu Desnoyers
    2017-11-10 23:13               ` Mathieu Desnoyers
    2017-11-10 22:36           ` Andy Lutomirski
    2017-11-10 22:36             ` Andy Lutomirski
    2017-11-10 22:39             ` Mathieu Desnoyers
    2017-11-10 22:39               ` Mathieu Desnoyers
    2017-11-10 21:37 ` [RFC PATCH for 4.15 06/10] Fix: x86: Add missing core serializing instruction on migration Mathieu Desnoyers
    2017-11-10 21:37   ` Mathieu Desnoyers
    2017-11-10 21:37 ` [RFC PATCH v2 for 4.15 07/10] membarrier: x86: Provide core serializing command Mathieu Desnoyers
    2017-11-10 21:37   ` Mathieu Desnoyers
    2017-11-10 21:37 ` [RFC PATCH for 4.15 08/10] membarrier: selftest: Test private expedited sync core cmd Mathieu Desnoyers
    2017-11-10 21:37   ` Mathieu Desnoyers
    2017-11-10 21:37 ` [RFC PATCH for 4.15 10/10] membarrier: selftest: Test shared expedited cmd Mathieu Desnoyers
    2017-11-10 21:37   ` Mathieu Desnoyers
    
