linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v10 11/12] arm64: factor work_pending state machine to C
       [not found] <1456949376-4910-1-git-send-email-cmetcalf@ezchip.com>
@ 2016-03-02 20:09 ` Chris Metcalf
  2016-03-04 16:38   ` Will Deacon
  2016-03-02 20:09 ` [PATCH v10 12/12] arch/arm64: enable task isolation functionality Chris Metcalf
  1 sibling, 1 reply; 5+ messages in thread
From: Chris Metcalf @ 2016-03-02 20:09 UTC (permalink / raw)
  To: linux-arm-kernel

Currently ret_fast_syscall, work_pending, and ret_to_user form an ad-hoc
state machine that can be difficult to reason about due to duplicated
code and a large number of branch targets.

This patch factors the common logic out into the existing
do_notify_resume function, converting the code to C in the process,
making the code more legible.

This patch tries to closely mirror the existing behaviour while using
the usual C control flow primitives. As local_irq_{disable,enable} may
be instrumented, we balance exception entry (where we will almost
certainly enable IRQs) with a call to trace_hardirqs_on just before the
return to userspace.

Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
---
 arch/arm64/kernel/entry.S  | 12 ++++--------
 arch/arm64/kernel/signal.c | 30 ++++++++++++++++++++++--------
 2 files changed, 26 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 1f7f5a2b61bf..966d0d4308f2 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -674,18 +674,13 @@ ret_fast_syscall_trace:
  * Ok, we need to do extra processing, enter the slow path.
  */
 work_pending:
-	tbnz	x1, #TIF_NEED_RESCHED, work_resched
-	/* TIF_SIGPENDING, TIF_NOTIFY_RESUME or TIF_FOREIGN_FPSTATE case */
 	mov	x0, sp				// 'regs'
-	enable_irq				// enable interrupts for do_notify_resume()
 	bl	do_notify_resume
-	b	ret_to_user
-work_resched:
 #ifdef CONFIG_TRACE_IRQFLAGS
-	bl	trace_hardirqs_off		// the IRQs are off here, inform the tracing code
+	bl	trace_hardirqs_on		// enabled while in userspace
 #endif
-	bl	schedule
-
+	ldr	x1, [tsk, #TI_FLAGS]		// re-check for single-step
+	b	finish_ret_to_user
 /*
  * "slow" syscall return path.
  */
@@ -694,6 +689,7 @@ ret_to_user:
 	ldr	x1, [tsk, #TI_FLAGS]
 	and	x2, x1, #_TIF_WORK_MASK
 	cbnz	x2, work_pending
+finish_ret_to_user:
 	enable_step_tsk x1, x2
 	kernel_exit 0
 ENDPROC(ret_to_user)
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index e18c48cb6db1..3432e14b7d6e 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -402,15 +402,29 @@ static void do_signal(struct pt_regs *regs)
 asmlinkage void do_notify_resume(struct pt_regs *regs,
 				 unsigned int thread_flags)
 {
-	if (thread_flags & _TIF_SIGPENDING)
-		do_signal(regs);
+	while (true) {
 
-	if (thread_flags & _TIF_NOTIFY_RESUME) {
-		clear_thread_flag(TIF_NOTIFY_RESUME);
-		tracehook_notify_resume(regs);
-	}
+		if (thread_flags & _TIF_NEED_RESCHED) {
+			schedule();
+		} else {
+			local_irq_enable();
+
+			if (thread_flags & _TIF_SIGPENDING)
+				do_signal(regs);
 
-	if (thread_flags & _TIF_FOREIGN_FPSTATE)
-		fpsimd_restore_current_state();
+			if (thread_flags & _TIF_NOTIFY_RESUME) {
+				clear_thread_flag(TIF_NOTIFY_RESUME);
+				tracehook_notify_resume(regs);
+			}
+
+			if (thread_flags & _TIF_FOREIGN_FPSTATE)
+				fpsimd_restore_current_state();
+		}
 
+		local_irq_disable();
+
+		thread_flags = READ_ONCE(current_thread_info()->flags);
+		if (!(thread_flags & _TIF_WORK_MASK))
+			break;
+	}
 }
-- 
2.1.2

* [PATCH v10 12/12] arch/arm64: enable task isolation functionality
       [not found] <1456949376-4910-1-git-send-email-cmetcalf@ezchip.com>
  2016-03-02 20:09 ` [PATCH v10 11/12] arm64: factor work_pending state machine to C Chris Metcalf
@ 2016-03-02 20:09 ` Chris Metcalf
  1 sibling, 0 replies; 5+ messages in thread
From: Chris Metcalf @ 2016-03-02 20:09 UTC (permalink / raw)
  To: linux-arm-kernel

In do_notify_resume(), call task_isolation_ready() when we are
checking the thread-info flags, and after we've handled the other
work, call task_isolation_enter() unconditionally.  To ensure we
always call task_isolation_enter() when returning to userspace,
modify _TIF_WORK_MASK to be _TIF_NOHZ (a flag that is set in every
task) when we build with TASK_ISOLATION configured.

We tweak syscall_trace_enter() slightly to read the thread-info
flags once from current_thread_info()->flags and use that value for
each of the tests, rather than doing a volatile read from memory for
each one.  This avoids a small overhead for each test, and in
particular avoids that overhead for TIF_NOHZ when TASK_ISOLATION is
not enabled.

We instrument the smp_cross_call() routine so that it checks for
isolated tasks and generates a suitable warning if we are about
to disturb one of them in strict or debug mode.

Finally, add an explicit check for STRICT mode in do_mem_abort()
to handle the case of page faults.

Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
---
 arch/arm64/include/asm/thread_info.h |  8 +++++++-
 arch/arm64/kernel/ptrace.c           | 12 +++++++++---
 arch/arm64/kernel/signal.c           |  6 +++++-
 arch/arm64/kernel/smp.c              |  2 ++
 arch/arm64/mm/fault.c                |  4 ++++
 5 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index abd64bd1f6d9..89c72888cb54 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -131,9 +131,15 @@ static inline struct thread_info *current_thread_info(void)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
 #define _TIF_32BIT		(1 << TIF_32BIT)
 
-#define _TIF_WORK_MASK		(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
+#define _TIF_WORK_LOOP_MASK	(_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
 				 _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE)
 
+#ifdef CONFIG_TASK_ISOLATION
+# define _TIF_WORK_MASK		_TIF_NOHZ  /* always set */
+#else
+# define _TIF_WORK_MASK		_TIF_WORK_LOOP_MASK
+#endif
+
 #define _TIF_SYSCALL_WORK	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
 				 _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
 				 _TIF_NOHZ)
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index ff7f13239515..43aa6d016f46 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -37,6 +37,7 @@
 #include <linux/regset.h>
 #include <linux/tracehook.h>
 #include <linux/elf.h>
+#include <linux/isolation.h>
 
 #include <asm/compat.h>
 #include <asm/debug-monitors.h>
@@ -1246,14 +1247,19 @@ static void tracehook_report_syscall(struct pt_regs *regs,
 
 asmlinkage int syscall_trace_enter(struct pt_regs *regs)
 {
-	/* Do the secure computing check first; failures should be fast. */
+	unsigned long work = ACCESS_ONCE(current_thread_info()->flags);
+
+	if ((work & _TIF_NOHZ) && task_isolation_check_syscall(regs->syscallno))
+		return -1;
+
+	/* Do the secure computing check early; failures should be fast. */
 	if (secure_computing() == -1)
 		return -1;
 
-	if (test_thread_flag(TIF_SYSCALL_TRACE))
+	if (work & _TIF_SYSCALL_TRACE)
 		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
 
-	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+	if (work & _TIF_SYSCALL_TRACEPOINT)
 		trace_sys_enter(regs, regs->syscallno);
 
 	audit_syscall_entry(regs->syscallno, regs->orig_x0, regs->regs[1],
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 3432e14b7d6e..53fcd6c305d6 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -25,6 +25,7 @@
 #include <linux/uaccess.h>
 #include <linux/tracehook.h>
 #include <linux/ratelimit.h>
+#include <linux/isolation.h>
 
 #include <asm/debug-monitors.h>
 #include <asm/elf.h>
@@ -419,12 +420,15 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,
 
 			if (thread_flags & _TIF_FOREIGN_FPSTATE)
 				fpsimd_restore_current_state();
+
+			task_isolation_enter();
 		}
 
 		local_irq_disable();
 
 		thread_flags = READ_ONCE(current_thread_info()->flags);
-		if (!(thread_flags & _TIF_WORK_MASK))
+		if (!(thread_flags & _TIF_WORK_LOOP_MASK) &&
+		    task_isolation_ready())
 			break;
 	}
 }
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index b1adc51b2c2e..dcb3282d04a2 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -37,6 +37,7 @@
 #include <linux/completion.h>
 #include <linux/of.h>
 #include <linux/irq_work.h>
+#include <linux/isolation.h>
 
 #include <asm/alternative.h>
 #include <asm/atomic.h>
@@ -632,6 +633,7 @@ static const char *ipi_types[NR_IPI] __tracepoint_string = {
 static void smp_cross_call(const struct cpumask *target, unsigned int ipinr)
 {
 	trace_ipi_raise(target, ipi_types[ipinr]);
+	task_isolation_debug_cpumask(target);
 	__smp_cross_call(target, ipinr);
 }
 
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index abe2a9542b3a..644cd634dd1d 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -29,6 +29,7 @@
 #include <linux/sched.h>
 #include <linux/highmem.h>
 #include <linux/perf_event.h>
+#include <linux/isolation.h>
 
 #include <asm/cpufeature.h>
 #include <asm/exception.h>
@@ -473,6 +474,9 @@ asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
 	const struct fault_info *inf = fault_info + (esr & 63);
 	struct siginfo info;
 
+	if (user_mode(regs))
+		task_isolation_check_exception("%s at %#lx", inf->name, addr);
+
 	if (!inf->fn(addr, esr, regs))
 		return;
 
-- 
2.1.2

* [PATCH v10 11/12] arm64: factor work_pending state machine to C
  2016-03-02 20:09 ` [PATCH v10 11/12] arm64: factor work_pending state machine to C Chris Metcalf
@ 2016-03-04 16:38   ` Will Deacon
  2016-03-04 20:02     ` Chris Metcalf
  0 siblings, 1 reply; 5+ messages in thread
From: Will Deacon @ 2016-03-04 16:38 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Chris,

On Wed, Mar 02, 2016 at 03:09:35PM -0500, Chris Metcalf wrote:
> Currently ret_fast_syscall, work_pending, and ret_to_user form an ad-hoc
> state machine that can be difficult to reason about due to duplicated
> code and a large number of branch targets.
> 
> This patch factors the common logic out into the existing
> do_notify_resume function, converting the code to C in the process,
> making the code more legible.
> 
> This patch tries to closely mirror the existing behaviour while using
> the usual C control flow primitives. As local_irq_{disable,enable} may
> be instrumented, we balance exception entry (where we will almost
> certainly enable IRQs) with a call to trace_hardirqs_on just before the
> return to userspace.

[...]

> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 1f7f5a2b61bf..966d0d4308f2 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -674,18 +674,13 @@ ret_fast_syscall_trace:
>   * Ok, we need to do extra processing, enter the slow path.
>   */
>  work_pending:
> -	tbnz	x1, #TIF_NEED_RESCHED, work_resched
> -	/* TIF_SIGPENDING, TIF_NOTIFY_RESUME or TIF_FOREIGN_FPSTATE case */
>  	mov	x0, sp				// 'regs'
> -	enable_irq				// enable interrupts for do_notify_resume()
>  	bl	do_notify_resume
> -	b	ret_to_user
> -work_resched:
>  #ifdef CONFIG_TRACE_IRQFLAGS
> -	bl	trace_hardirqs_off		// the IRQs are off here, inform the tracing code
> +	bl	trace_hardirqs_on		// enabled while in userspace

This doesn't look right to me. We only get here after running
do_notify_resume, which returns with interrupts disabled.

Do we not instead need to inform the tracing code that interrupts are
disabled prior to calling do_notify_resume?

> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
> index e18c48cb6db1..3432e14b7d6e 100644
> --- a/arch/arm64/kernel/signal.c
> +++ b/arch/arm64/kernel/signal.c
> @@ -402,15 +402,29 @@ static void do_signal(struct pt_regs *regs)
>  asmlinkage void do_notify_resume(struct pt_regs *regs,
>  				 unsigned int thread_flags)
>  {
> -	if (thread_flags & _TIF_SIGPENDING)
> -		do_signal(regs);
> +	while (true) {
>  
> -	if (thread_flags & _TIF_NOTIFY_RESUME) {
> -		clear_thread_flag(TIF_NOTIFY_RESUME);
> -		tracehook_notify_resume(regs);
> -	}
> +		if (thread_flags & _TIF_NEED_RESCHED) {
> +			schedule();
> +		} else {
> +			local_irq_enable();
> +
> +			if (thread_flags & _TIF_SIGPENDING)
> +				do_signal(regs);
>  
> -	if (thread_flags & _TIF_FOREIGN_FPSTATE)
> -		fpsimd_restore_current_state();
> +			if (thread_flags & _TIF_NOTIFY_RESUME) {
> +				clear_thread_flag(TIF_NOTIFY_RESUME);
> +				tracehook_notify_resume(regs);
> +			}
> +
> +			if (thread_flags & _TIF_FOREIGN_FPSTATE)
> +				fpsimd_restore_current_state();
> +		}
>  
> +		local_irq_disable();
> +
> +		thread_flags = READ_ONCE(current_thread_info()->flags);
> +		if (!(thread_flags & _TIF_WORK_MASK))
> +			break;
> +	}

This might be easier to read as a do { ... } while.

Will

* [PATCH v10 11/12] arm64: factor work_pending state machine to C
  2016-03-04 16:38   ` Will Deacon
@ 2016-03-04 20:02     ` Chris Metcalf
  2016-03-14 10:29       ` Mark Rutland
  0 siblings, 1 reply; 5+ messages in thread
From: Chris Metcalf @ 2016-03-04 20:02 UTC (permalink / raw)
  To: linux-arm-kernel

On 03/04/2016 11:38 AM, Will Deacon wrote:
> Hi Chris,
>
> On Wed, Mar 02, 2016 at 03:09:35PM -0500, Chris Metcalf wrote:
>> Currently ret_fast_syscall, work_pending, and ret_to_user form an ad-hoc
>> state machine that can be difficult to reason about due to duplicated
>> code and a large number of branch targets.
>>
>> This patch factors the common logic out into the existing
>> do_notify_resume function, converting the code to C in the process,
>> making the code more legible.
>>
>> This patch tries to closely mirror the existing behaviour while using
>> the usual C control flow primitives. As local_irq_{disable,enable} may
>> be instrumented, we balance exception entry (where we will almost
>> certainly enable IRQs) with a call to trace_hardirqs_on just before the
>> return to userspace.
> [...]
>
>> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
>> index 1f7f5a2b61bf..966d0d4308f2 100644
>> --- a/arch/arm64/kernel/entry.S
>> +++ b/arch/arm64/kernel/entry.S
>> @@ -674,18 +674,13 @@ ret_fast_syscall_trace:
>>    * Ok, we need to do extra processing, enter the slow path.
>>    */
>>   work_pending:
>> -	tbnz	x1, #TIF_NEED_RESCHED, work_resched
>> -	/* TIF_SIGPENDING, TIF_NOTIFY_RESUME or TIF_FOREIGN_FPSTATE case */
>>   	mov	x0, sp				// 'regs'
>> -	enable_irq				// enable interrupts for do_notify_resume()
>>   	bl	do_notify_resume
>> -	b	ret_to_user
>> -work_resched:
>>   #ifdef CONFIG_TRACE_IRQFLAGS
>> -	bl	trace_hardirqs_off		// the IRQs are off here, inform the tracing code
>> +	bl	trace_hardirqs_on		// enabled while in userspace
> This doesn't look right to me. We only get here after running
> do_notify_resume, which returns with interrupts disabled.
>
> Do we not instead need to inform the tracing code that interrupts are
> disabled prior to calling do_notify_resume?

I think you are right about the trace_hardirqs_off prior to
calling into do_notify_resume, given Catalin's recent commit to
add it.  I dropped it since I was moving schedule() into C code,
but I suspect we'll see the same problem that Catalin saw with
CONFIG_TRACE_IRQFLAGS without it.  I'll copy the arch/arm approach
and add a trace_hardirqs_off() at the top of do_notify_resume().
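
Roughly, that would look something like this (just a sketch, untested,
with the existing work loop elided):

	asmlinkage void do_notify_resume(struct pt_regs *regs,
					 unsigned int thread_flags)
	{
		/*
		 * Interrupts are still disabled here on entry from the
		 * assembly path, and the asm no longer informs the
		 * tracer, so do it before anything instrumented runs.
		 */
		trace_hardirqs_off();

		/* ... existing work loop unchanged ... */
	}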

The trace_hardirqs_on I was copying from Mark Rutland's earlier patch:

http://permalink.gmane.org/gmane.linux.ports.arm.kernel/467781

I don't know if it's necessary to flag that interrupts are enabled
prior to returning to userspace; it may well not be.  Mark, can you
comment on what led you to add that trace_hardirqs_on?

For now I've left both of them in there.

>> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
>> index e18c48cb6db1..3432e14b7d6e 100644
>> --- a/arch/arm64/kernel/signal.c
>> +++ b/arch/arm64/kernel/signal.c
>> @@ -402,15 +402,29 @@ static void do_signal(struct pt_regs *regs)
>>   asmlinkage void do_notify_resume(struct pt_regs *regs,
>>   				 unsigned int thread_flags)
>>   {
>> -	if (thread_flags & _TIF_SIGPENDING)
>> -		do_signal(regs);
>> +	while (true) {
>> [...]
>> +	}
> This might be easier to read as a do { ... } while.

Yes, and in fact that's how I did it for arch/tile, as the maintainer.
I picked up the arch/x86 version as more canonical to copy.  But I'm
more than happy to do it the other way :-).  Fixed.
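
For reference, the loop restructured that way reads roughly as follows
(sketch only, untested; the trace_hardirqs_off() discussed above is
omitted for brevity):

	asmlinkage void do_notify_resume(struct pt_regs *regs,
					 unsigned int thread_flags)
	{
		do {
			if (thread_flags & _TIF_NEED_RESCHED) {
				schedule();
			} else {
				local_irq_enable();

				if (thread_flags & _TIF_SIGPENDING)
					do_signal(regs);

				if (thread_flags & _TIF_NOTIFY_RESUME) {
					clear_thread_flag(TIF_NOTIFY_RESUME);
					tracehook_notify_resume(regs);
				}

				if (thread_flags & _TIF_FOREIGN_FPSTATE)
					fpsimd_restore_current_state();
			}

			local_irq_disable();

			thread_flags = READ_ONCE(current_thread_info()->flags);
		} while (thread_flags & _TIF_WORK_MASK);
	}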

-- 
Chris Metcalf, EZChip Semiconductor
http://www.ezchip.com

* [PATCH v10 11/12] arm64: factor work_pending state machine to C
  2016-03-04 20:02     ` Chris Metcalf
@ 2016-03-14 10:29       ` Mark Rutland
  0 siblings, 0 replies; 5+ messages in thread
From: Mark Rutland @ 2016-03-14 10:29 UTC (permalink / raw)
  To: linux-arm-kernel

Hi,

On Fri, Mar 04, 2016 at 03:02:47PM -0500, Chris Metcalf wrote:
> On 03/04/2016 11:38 AM, Will Deacon wrote:
> >Hi Chris,
> >
> >On Wed, Mar 02, 2016 at 03:09:35PM -0500, Chris Metcalf wrote:
> >>Currently ret_fast_syscall, work_pending, and ret_to_user form an ad-hoc
> >>state machine that can be difficult to reason about due to duplicated
> >>code and a large number of branch targets.
> >>
> >>This patch factors the common logic out into the existing
> >>do_notify_resume function, converting the code to C in the process,
> >>making the code more legible.
> >>
> >>This patch tries to closely mirror the existing behaviour while using
> >>the usual C control flow primitives. As local_irq_{disable,enable} may
> >>be instrumented, we balance exception entry (where we will almost
> >>certainly enable IRQs) with a call to trace_hardirqs_on just before the
> >>return to userspace.
> >[...]
> >
> >>diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> >>index 1f7f5a2b61bf..966d0d4308f2 100644
> >>--- a/arch/arm64/kernel/entry.S
> >>+++ b/arch/arm64/kernel/entry.S
> >>@@ -674,18 +674,13 @@ ret_fast_syscall_trace:
> >>   * Ok, we need to do extra processing, enter the slow path.
> >>   */
> >>  work_pending:
> >>-	tbnz	x1, #TIF_NEED_RESCHED, work_resched
> >>-	/* TIF_SIGPENDING, TIF_NOTIFY_RESUME or TIF_FOREIGN_FPSTATE case */
> >>  	mov	x0, sp				// 'regs'
> >>-	enable_irq				// enable interrupts for do_notify_resume()
> >>  	bl	do_notify_resume
> >>-	b	ret_to_user
> >>-work_resched:
> >>  #ifdef CONFIG_TRACE_IRQFLAGS
> >>-	bl	trace_hardirqs_off		// the IRQs are off here, inform the tracing code
> >>+	bl	trace_hardirqs_on		// enabled while in userspace
> >This doesn't look right to me. We only get here after running
> >do_notify_resume, which returns with interrupts disabled.
> >
> >Do we not instead need to inform the tracing code that interrupts are
> >disabled prior to calling do_notify_resume?
> 
> I think you are right about the trace_hardirqs_off prior to
> calling into do_notify_resume, given Catalin's recent commit to
> add it.  I dropped it since I was moving schedule() into C code,
> but I suspect we'll see the same problem that Catalin saw with
> CONFIG_TRACE_IRQFLAGS without it.  I'll copy the arch/arm approach
> and add a trace_hardirqs_off() at the top of do_notify_resume().
> 
> The trace_hardirqs_on I was copying from Mark Rutland's earlier patch:
> 
> http://permalink.gmane.org/gmane.linux.ports.arm.kernel/467781
> 
> I don't know if it's necessary to flag that interrupts are enabled
> prior to returning to userspace; it may well not be.  Mark, can you
> comment on what led you to add that trace_hardirqs_on?

From what I recall, we didn't properly trace enabling IRQs in all the
asm entry paths from userspace, and doing this made things appear
balanced to the tracing code (as the existing behaviour of masking IRQs
in assembly did).

It was more expedient / simpler than fixing all the entry assembly to
update the IRQ tracing state correctly, which I had expected to rework
if/when moving the rest to C.

Thanks,
Mark.
