* [PATCH v3 1/7] arm64: ptrace: Consistently use pseudo-singlestep exceptions
[not found] <20200710130702.30658-1-will@kernel.org>
@ 2020-07-10 13:06 ` Will Deacon
2020-07-16 0:27 ` Sasha Levin
2020-07-10 13:06 ` [PATCH v3 2/7] arm64: ptrace: Override SPSR.SS when single-stepping is enabled Will Deacon
2020-07-10 13:06 ` [PATCH v3 3/7] arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return Will Deacon
2 siblings, 1 reply; 6+ messages in thread
From: Will Deacon @ 2020-07-10 13:06 UTC (permalink / raw)
To: linux-arm-kernel
Cc: catalin.marinas, Will Deacon, kernel-team, Mark Rutland,
Luis Machado, Keno Fischer, Kees Cook, stable
Although the arm64 single-step state machine can be fast-forwarded in
cases where we wish to generate a SIGTRAP without actually executing an
instruction, this has two major limitations outside of simply skipping
an instruction due to emulation.
1. Stepping out of a ptrace signal stop into a signal handler where
SIGTRAP is blocked. Fast-forwarding the stepping state machine in
this case will result in a forced SIGTRAP, with the handler reset to
SIG_DFL.
2. The hardware implicitly fast-forwards the state machine when executing
an SVC instruction for issuing a system call. This can interact badly
with subsequent ptrace stops signalled during the execution of the
system call (e.g. SYSCALL_EXIT or seccomp traps), as they may corrupt
the stepping state by updating the PSTATE for the tracee.
Resolve both of these issues by injecting a pseudo-singlestep exception
on entry to a signal handler and also on return to userspace following a
system call.
Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Luis Machado <luis.machado@linaro.org>
Reported-by: Keno Fischer <keno@juliacomputing.com>
Signed-off-by: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/thread_info.h | 1 +
arch/arm64/kernel/ptrace.c | 25 +++++++++++++++++++------
arch/arm64/kernel/signal.c | 11 ++---------
arch/arm64/kernel/syscall.c | 2 +-
4 files changed, 23 insertions(+), 16 deletions(-)
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 6ea8b6a26ae9..5e784e16ee89 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -93,6 +93,7 @@ void arch_release_task_struct(struct task_struct *tsk);
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_UPROBE		(1 << TIF_UPROBE)
 #define _TIF_FSCHECK		(1 << TIF_FSCHECK)
+#define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
 #define _TIF_32BIT		(1 << TIF_32BIT)
 #define _TIF_SVE		(1 << TIF_SVE)
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 68b7f34a08f5..057d4aa1af4d 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1818,12 +1818,23 @@ static void tracehook_report_syscall(struct pt_regs *regs,
 	saved_reg = regs->regs[regno];
 	regs->regs[regno] = dir;
 
-	if (dir == PTRACE_SYSCALL_EXIT)
+	if (dir == PTRACE_SYSCALL_ENTER) {
+		if (tracehook_report_syscall_entry(regs))
+			forget_syscall(regs);
+		regs->regs[regno] = saved_reg;
+	} else if (!test_thread_flag(TIF_SINGLESTEP)) {
 		tracehook_report_syscall_exit(regs, 0);
-	else if (tracehook_report_syscall_entry(regs))
-		forget_syscall(regs);
+		regs->regs[regno] = saved_reg;
+	} else {
+		regs->regs[regno] = saved_reg;
 
-	regs->regs[regno] = saved_reg;
+		/*
+		 * Signal a pseudo-step exception since we are stepping but
+		 * tracer modifications to the registers may have rewound the
+		 * state machine.
+		 */
+		tracehook_report_syscall_exit(regs, 1);
+	}
 }
 
 int syscall_trace_enter(struct pt_regs *regs)
@@ -1851,12 +1862,14 @@ int syscall_trace_enter(struct pt_regs *regs)
 
 void syscall_trace_exit(struct pt_regs *regs)
 {
+	unsigned long flags = READ_ONCE(current_thread_info()->flags);
+
 	audit_syscall_exit(regs);
 
-	if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
+	if (flags & _TIF_SYSCALL_TRACEPOINT)
 		trace_sys_exit(regs, regs_return_value(regs));
 
-	if (test_thread_flag(TIF_SYSCALL_TRACE))
+	if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
 		tracehook_report_syscall(regs, PTRACE_SYSCALL_EXIT);
 
 	rseq_syscall(regs);
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 801d56cdf701..3b4f31f35e45 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -800,7 +800,6 @@ static void setup_restart_syscall(struct pt_regs *regs)
  */
 static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
 {
-	struct task_struct *tsk = current;
 	sigset_t *oldset = sigmask_to_save();
 	int usig = ksig->sig;
 	int ret;
@@ -824,14 +823,8 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
 	 */
 	ret |= !valid_user_regs(&regs->user_regs, current);
 
-	/*
-	 * Fast forward the stepping logic so we step into the signal
-	 * handler.
-	 */
-	if (!ret)
-		user_fastforward_single_step(tsk);
-
-	signal_setup_done(ret, ksig, 0);
+	/* Step into the signal handler if we are stepping */
+	signal_setup_done(ret, ksig, test_thread_flag(TIF_SINGLESTEP));
 }
/*
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 5f5b868292f5..7c14466a12af 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -139,7 +139,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
 	if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
 		local_daif_mask();
 		flags = current_thread_info()->flags;
-		if (!has_syscall_work(flags)) {
+		if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP)) {
 			/*
 			 * We're off to userspace, where interrupts are
 			 * always enabled after we restore the flags from
--
2.27.0.383.g050319c2ae-goog
* [PATCH v3 2/7] arm64: ptrace: Override SPSR.SS when single-stepping is enabled
[not found] <20200710130702.30658-1-will@kernel.org>
2020-07-10 13:06 ` [PATCH v3 1/7] arm64: ptrace: Consistently use pseudo-singlestep exceptions Will Deacon
@ 2020-07-10 13:06 ` Will Deacon
2020-07-16 0:27 ` Sasha Levin
2020-07-10 13:06 ` [PATCH v3 3/7] arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return Will Deacon
2 siblings, 1 reply; 6+ messages in thread
From: Will Deacon @ 2020-07-10 13:06 UTC (permalink / raw)
To: linux-arm-kernel
Cc: catalin.marinas, Will Deacon, kernel-team, Mark Rutland,
Luis Machado, Keno Fischer, Kees Cook, stable
Luis reports that, when reverse debugging with GDB, single-step does not
function as expected on arm64:
| I've noticed, under very specific conditions, that a PTRACE_SINGLESTEP
| request by GDB won't execute the underlying instruction. As a consequence,
| the PC doesn't move, but we return a SIGTRAP just like we would for a
| regular successful PTRACE_SINGLESTEP request.
The underlying problem is that when the CPU register state is restored
as part of a reverse step, the SPSR.SS bit is cleared and so the hardware
single-step state can transition to the "active-pending" state, causing
an unexpected step exception to be taken immediately if a step operation
is attempted.
In hindsight, we probably shouldn't have exposed SPSR.SS in the pstate
accessible by the GPR regset, but it's a bit late for that now. Instead,
simply prevent userspace from configuring the bit to a value which is
inconsistent with the TIF_SINGLESTEP state for the task being traced.
Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Keno Fischer <keno@juliacomputing.com>
Link: https://lore.kernel.org/r/1eed6d69-d53d-9657-1fc9-c089be07f98c@linaro.org
Reported-by: Luis Machado <luis.machado@linaro.org>
Tested-by: Luis Machado <luis.machado@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/debug-monitors.h | 2 ++
arch/arm64/kernel/debug-monitors.c | 20 ++++++++++++++++----
arch/arm64/kernel/ptrace.c | 4 ++--
3 files changed, 20 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
index e5ceea213e39..0b298f48f5bf 100644
--- a/arch/arm64/include/asm/debug-monitors.h
+++ b/arch/arm64/include/asm/debug-monitors.h
@@ -109,6 +109,8 @@ void disable_debug_monitors(enum dbg_active_el el);
 
 void user_rewind_single_step(struct task_struct *task);
 void user_fastforward_single_step(struct task_struct *task);
+void user_regs_reset_single_step(struct user_pt_regs *regs,
+				 struct task_struct *task);
 
 void kernel_enable_single_step(struct pt_regs *regs);
 void kernel_disable_single_step(void);
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 5df49366e9ab..91146c0a3691 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -141,17 +141,20 @@ postcore_initcall(debug_monitors_init);
 /*
  * Single step API and exception handling.
  */
-static void set_regs_spsr_ss(struct pt_regs *regs)
+static void set_user_regs_spsr_ss(struct user_pt_regs *regs)
 {
 	regs->pstate |= DBG_SPSR_SS;
 }
-NOKPROBE_SYMBOL(set_regs_spsr_ss);
+NOKPROBE_SYMBOL(set_user_regs_spsr_ss);
 
-static void clear_regs_spsr_ss(struct pt_regs *regs)
+static void clear_user_regs_spsr_ss(struct user_pt_regs *regs)
 {
 	regs->pstate &= ~DBG_SPSR_SS;
 }
-NOKPROBE_SYMBOL(clear_regs_spsr_ss);
+NOKPROBE_SYMBOL(clear_user_regs_spsr_ss);
+
+#define set_regs_spsr_ss(r)	set_user_regs_spsr_ss(&(r)->user_regs)
+#define clear_regs_spsr_ss(r)	clear_user_regs_spsr_ss(&(r)->user_regs)
 
 static DEFINE_SPINLOCK(debug_hook_lock);
 static LIST_HEAD(user_step_hook);
@@ -402,6 +405,15 @@ void user_fastforward_single_step(struct task_struct *task)
 		clear_regs_spsr_ss(task_pt_regs(task));
 }
 
+void user_regs_reset_single_step(struct user_pt_regs *regs,
+				 struct task_struct *task)
+{
+	if (test_tsk_thread_flag(task, TIF_SINGLESTEP))
+		set_user_regs_spsr_ss(regs);
+	else
+		clear_user_regs_spsr_ss(regs);
+}
+
 /* Kernel API */
 void kernel_enable_single_step(struct pt_regs *regs)
 {
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 057d4aa1af4d..22f9053b55b6 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1947,8 +1947,8 @@ static int valid_native_regs(struct user_pt_regs *regs)
  */
 int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task)
 {
-	if (!test_tsk_thread_flag(task, TIF_SINGLESTEP))
-		regs->pstate &= ~DBG_SPSR_SS;
+	/* https://lore.kernel.org/lkml/20191118131525.GA4180@willie-the-truck */
+	user_regs_reset_single_step(regs, task);
 
 	if (is_compat_thread(task_thread_info(task)))
 		return valid_compat_regs(regs);
--
2.27.0.383.g050319c2ae-goog
* [PATCH v3 3/7] arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return
[not found] <20200710130702.30658-1-will@kernel.org>
2020-07-10 13:06 ` [PATCH v3 1/7] arm64: ptrace: Consistently use pseudo-singlestep exceptions Will Deacon
2020-07-10 13:06 ` [PATCH v3 2/7] arm64: ptrace: Override SPSR.SS when single-stepping is enabled Will Deacon
@ 2020-07-10 13:06 ` Will Deacon
2020-07-16 0:27 ` Sasha Levin
2 siblings, 1 reply; 6+ messages in thread
From: Will Deacon @ 2020-07-10 13:06 UTC (permalink / raw)
To: linux-arm-kernel
Cc: catalin.marinas, Will Deacon, kernel-team, Mark Rutland,
Luis Machado, Keno Fischer, Kees Cook, stable
Although we zero the upper bits of x0 on entry to the kernel from an
AArch32 task, we do not clear them on the exception return path and can
therefore expose 64-bit sign extended syscall return values to userspace
via interfaces such as the 'perf_regs' ABI, which deal exclusively with
64-bit registers.
Explicitly clear the upper 32 bits of x0 on return from a compat system
call.
Cc: <stable@vger.kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Keno Fischer <keno@juliacomputing.com>
Cc: Luis Machado <luis.machado@linaro.org>
Signed-off-by: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/syscall.h | 12 +++++++++++-
arch/arm64/kernel/syscall.c | 3 +++
2 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index 65299a2dcf9c..cfc0672013f6 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -34,6 +34,10 @@ static inline long syscall_get_error(struct task_struct *task,
 					     struct pt_regs *regs)
 {
 	unsigned long error = regs->regs[0];
+
+	if (is_compat_thread(task_thread_info(task)))
+		error = sign_extend64(error, 31);
+
 	return IS_ERR_VALUE(error) ? error : 0;
 }
@@ -47,7 +51,13 @@ static inline void syscall_set_return_value(struct task_struct *task,
 					    struct pt_regs *regs,
 					    int error, long val)
 {
-	regs->regs[0] = (long) error ? error : val;
+	if (error)
+		val = error;
+
+	if (is_compat_thread(task_thread_info(task)))
+		val = lower_32_bits(val);
+
+	regs->regs[0] = val;
 }
 
 #define SYSCALL_MAX_ARGS 6
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index 7c14466a12af..98a26d4e7b0c 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -50,6 +50,9 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
 		ret = do_ni_syscall(regs, scno);
 	}
 
+	if (is_compat_task())
+		ret = lower_32_bits(ret);
+
 	regs->regs[0] = ret;
 }
--
2.27.0.383.g050319c2ae-goog
* Re: [PATCH v3 3/7] arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return
2020-07-10 13:06 ` [PATCH v3 3/7] arm64: compat: Ensure upper 32 bits of x0 are zero on syscall return Will Deacon
@ 2020-07-16 0:27 ` Sasha Levin
0 siblings, 0 replies; 6+ messages in thread
From: Sasha Levin @ 2020-07-16 0:27 UTC (permalink / raw)
To: Sasha Levin, Will Deacon, linux-arm-kernel
Cc: catalin.marinas, Will Deacon, stable, Mark Rutland, Keno Fischer,
Luis Machado, stable
Hi
[This is an automated email]
This commit has been processed because it contains a -stable tag.
The stable tag indicates that it's relevant for the following trees: all
The bot has tested the following trees: v5.7.8, v5.4.51, v4.19.132, v4.14.188, v4.9.230, v4.4.230.
v5.7.8: Build OK!
v5.4.51: Build OK!
v4.19.132: Build OK!
v4.14.188: Failed to apply! Possible dependencies:
0013aceb30748 ("xtensa: clean up fixups in assembly code")
1af1e8a39dc0f ("xtensa: move fixmap and kmap just above the KSEG")
2a61f4747eeaa ("stack-protector: test compiler capability in Kconfig and drop AUTO mode")
2b8383927525d ("Makefile: move stack-protector compiler breakage test earlier")
2bc2f688fdf88 ("Makefile: move stack-protector availability out of Kconfig")
409d5db49867c ("arm64: rseq: Implement backend rseq calls and select HAVE_RSEQ")
40d1a07b333ef ("xtensa: enable stack protector")
4141c857fd09d ("arm64: convert raw syscall invocation to C")
44c6dc940b190 ("Makefile: introduce CONFIG_CC_STACKPROTECTOR_AUTO")
5cf97ebd8b40e ("xtensa: clean up functions in assembly code")
8d66772e869e7 ("arm64: Mask all exceptions during kernel_exit")
9800b9dc13cdf ("arm: Add restartable sequences support")
c2edb35ae342f ("xtensa: extract init_kio")
c633544a61541 ("xtensa: add support for KASAN")
d148eac0e70f0 ("Kbuild: rename HAVE_CC_STACKPROTECTOR config variable")
f4431396be5b2 ("xtensa: consolidate kernel stack size related definitions")
v4.9.230: Failed to apply! Possible dependencies:
12597988319c8 ("MIPS: Sort MIPS Kconfig Alphabetically.")
2a61f4747eeaa ("stack-protector: test compiler capability in Kconfig and drop AUTO mode")
2b8383927525d ("Makefile: move stack-protector compiler breakage test earlier")
2bc2f688fdf88 ("Makefile: move stack-protector availability out of Kconfig")
313dd1b629219 ("gcc-plugins: Add the randstruct plugin")
39c13c204bb11 ("arm: eBPF JIT compiler")
409d5db49867c ("arm64: rseq: Implement backend rseq calls and select HAVE_RSEQ")
4141c857fd09d ("arm64: convert raw syscall invocation to C")
44c6dc940b190 ("Makefile: introduce CONFIG_CC_STACKPROTECTOR_AUTO")
74d86a70636a0 ("arm: use set_memory.h header")
8d66772e869e7 ("arm64: Mask all exceptions during kernel_exit")
9800b9dc13cdf ("arm: Add restartable sequences support")
c02433dd6de32 ("arm64: split thread_info from task stack")
c61f13eaa1ee1 ("gcc-plugins: Add structleak for more stack initialization")
c763ea2650dfa ("x86/kconfig: Sort the 'config X86' selects alphabetically")
d148eac0e70f0 ("Kbuild: rename HAVE_CC_STACKPROTECTOR config variable")
f381bf6d82f03 ("MIPS: Add support for eBPF JIT.")
v4.4.230: Failed to apply! Possible dependencies:
218cfe4ed8885 ("perf/x86: Move perf_event_amd_ibs.c ....... => x86/events/amd/ibs.c")
25a77b55e74c4 ("xtensa/perf: Convert the hotplug notifier to state machine callbacks")
2a803c4db615d ("arm64: head.S: use memset to clear BSS")
2bf31a4a05f5b ("arm64: avoid dynamic relocations in early boot code")
3600c2fdc09a4 ("arm64: Add macros to read/write system registers")
39b0332a21583 ("perf/x86: Move perf_event_amd.c ........... => x86/events/amd/core.c")
4141c857fd09d ("arm64: convert raw syscall invocation to C")
499c81507f599 ("arm64/debug: Remove superfluous SMP function call")
49de0493e5f67 ("x86/perf/intel/cstate: Make cstate hotplug handling actually work")
4b6e2571bf000 ("x86/perf/intel/rapl: Make the Intel RAPL PMU driver modular")
5b26547dd7faa ("perf/x86: Move perf_event_amd_iommu.[ch] .. => x86/events/amd/iommu.[ch]")
609116d202a8c ("arm64: add function to install the idmap")
6cdf9c7ca687e ("arm64: Store struct thread_info in sp_el0")
77c34ef1c3194 ("perf/x86/intel/cstate: Convert Intel CSTATE to hotplug state machine")
8d66772e869e7 ("arm64: Mask all exceptions during kernel_exit")
9e8e865bbe294 ("arm64: unify idmap removal")
a563f7598198b ("arm64: Reuse TCR field definitions for EL1 and EL2")
adf7589997927 ("arm64: simplify sysreg manipulation")
ae7e27fe6834d ("arm64: hw_breakpoint: Allow EL2 breakpoints if running in HYP")
bb9052744f4b7 ("arm64: Handle early CPU boot failures")
c02433dd6de32 ("arm64: split thread_info from task stack")
c4bc34d20273d ("arm64: Add a helper for parking CPUs in a loop")
c7afba320e91c ("x86/perf/intel/cstate: Modularize driver")
e5b61bafe7047 ("arm: Convert VFP hotplug notifiers to state machine")
e633c65a1d585 ("x86/perf/intel/uncore: Make the Intel uncore PMU driver modular")
e937dd5782688 ("arm64: debug: convert OS lock CPU hotplug notifier to new infrastructure")
ee02a15919cf8 ("arm64: Introduce cpu_die_early")
fa9cbf320e996 ("perf/x86: Move perf_event.c ............... => x86/events/core.c")
fce6361fe9b0c ("arm64: Move cpu_die_early to smp.c")
NOTE: The patch will not be queued to stable trees until it is upstream.
How should we proceed with this patch?
--
Thanks
Sasha
* Re: [PATCH v3 1/7] arm64: ptrace: Consistently use pseudo-singlestep exceptions
2020-07-10 13:06 ` [PATCH v3 1/7] arm64: ptrace: Consistently use pseudo-singlestep exceptions Will Deacon
@ 2020-07-16 0:27 ` Sasha Levin
0 siblings, 0 replies; 6+ messages in thread
From: Sasha Levin @ 2020-07-16 0:27 UTC (permalink / raw)
To: Sasha Levin, Will Deacon, linux-arm-kernel
Cc: catalin.marinas, Will Deacon, stable, Mark Rutland, Luis Machado,
stable
Hi
[This is an automated email]
This commit has been processed because it contains a -stable tag.
The stable tag indicates that it's relevant for the following trees: all
The bot has tested the following trees: v5.7.8, v5.4.51, v4.19.132, v4.14.188, v4.9.230, v4.4.230.
v5.7.8: Build OK!
v5.4.51: Build OK!
v4.19.132: Build OK!
v4.14.188: Failed to apply! Possible dependencies:
0013aceb30748 ("xtensa: clean up fixups in assembly code")
1af1e8a39dc0f ("xtensa: move fixmap and kmap just above the KSEG")
2a61f4747eeaa ("stack-protector: test compiler capability in Kconfig and drop AUTO mode")
2b8383927525d ("Makefile: move stack-protector compiler breakage test earlier")
2bc2f688fdf88 ("Makefile: move stack-protector availability out of Kconfig")
409d5db49867c ("arm64: rseq: Implement backend rseq calls and select HAVE_RSEQ")
40d1a07b333ef ("xtensa: enable stack protector")
44c6dc940b190 ("Makefile: introduce CONFIG_CC_STACKPROTECTOR_AUTO")
5cf97ebd8b40e ("xtensa: clean up functions in assembly code")
8d66772e869e7 ("arm64: Mask all exceptions during kernel_exit")
9800b9dc13cdf ("arm: Add restartable sequences support")
a92d4d1454ab8 ("arm64: entry.S: move SError handling into a C function for future expansion")
c2edb35ae342f ("xtensa: extract init_kio")
c633544a61541 ("xtensa: add support for KASAN")
d148eac0e70f0 ("Kbuild: rename HAVE_CC_STACKPROTECTOR config variable")
f37099b6992a0 ("arm64: convert syscall trace logic to C")
f4431396be5b2 ("xtensa: consolidate kernel stack size related definitions")
v4.9.230: Failed to apply! Possible dependencies:
096683724cb2e ("arm64: unwind: avoid percpu indirection for irq stack")
12964443e8d19 ("arm64: add on_accessible_stack()")
17c2895860092 ("arm64: Abstract syscallno manipulation")
34be98f4944f9 ("arm64: kernel: remove {THREAD,IRQ_STACK}_START_SP")
35d0e6fb4d219 ("arm64: syscallno is secretly an int, make it official")
8018ba4edfd3a ("arm64: move SEGMENT_ALIGN to <asm/memory.h>")
872d8327ce898 ("arm64: add VMAP_STACK overflow detection")
9842ceae9fa8d ("arm64: Add uprobe support")
a92d4d1454ab8 ("arm64: entry.S: move SError handling into a C function for future expansion")
a9ea0017ebe88 ("arm64: factor out current_stack_pointer")
c02433dd6de32 ("arm64: split thread_info from task stack")
c7365330753c5 ("arm64: unwind: disregard frame.sp when validating frame pointer")
cf7de27ab3517 ("arm64/syscalls: Check address limit on user-mode return")
dbc9344a68e50 ("arm64: clean up THREAD_* definitions")
f37099b6992a0 ("arm64: convert syscall trace logic to C")
f60ad4edcf072 ("arm64: clean up irq stack definitions")
v4.4.230: Failed to apply! Possible dependencies:
0a28714c53fd4 ("arm64: Use PoU cache instr for I/D coherency")
0a8ea52c3eb15 ("arm64: Add HAVE_REGS_AND_STACK_ACCESS_API feature")
17c2895860092 ("arm64: Abstract syscallno manipulation")
2dd0e8d2d2a15 ("arm64: Kprobes with single stepping support")
35d0e6fb4d219 ("arm64: syscallno is secretly an int, make it official")
39a67d49ba353 ("arm64: kprobes instruction simulation support")
421dd6fa6709e ("arm64: factor work_pending state machine to C")
57f4959bad0a1 ("arm64: kernel: Add support for User Access Override")
872d8327ce898 ("arm64: add VMAP_STACK overflow detection")
9842ceae9fa8d ("arm64: Add uprobe support")
a92d4d1454ab8 ("arm64: entry.S: move SError handling into a C function for future expansion")
b11e5759bfac0 ("arm64: factor out entry stack manipulation")
cf7de27ab3517 ("arm64/syscalls: Check address limit on user-mode return")
d5370f7548754 ("arm64: prefetch: add alternative pattern for CPUs without a prefetcher")
d59bee8872311 ("arm64: Add more test functions to insn.c")
e04a28d45ff34 ("arm64: debug: re-enable irqs before sending breakpoint SIGTRAP")
f37099b6992a0 ("arm64: convert syscall trace logic to C")
NOTE: The patch will not be queued to stable trees until it is upstream.
How should we proceed with this patch?
--
Thanks
Sasha
* Re: [PATCH v3 2/7] arm64: ptrace: Override SPSR.SS when single-stepping is enabled
2020-07-10 13:06 ` [PATCH v3 2/7] arm64: ptrace: Override SPSR.SS when single-stepping is enabled Will Deacon
@ 2020-07-16 0:27 ` Sasha Levin
0 siblings, 0 replies; 6+ messages in thread
From: Sasha Levin @ 2020-07-16 0:27 UTC (permalink / raw)
To: Sasha Levin, Will Deacon, linux-arm-kernel
Cc: catalin.marinas, Will Deacon, stable, Mark Rutland, Keno Fischer,
stable
Hi
[This is an automated email]
This commit has been processed because it contains a -stable tag.
The stable tag indicates that it's relevant for the following trees: all
The bot has tested the following trees: v5.7.8, v5.4.51, v4.19.132, v4.14.188, v4.9.230, v4.4.230.
v5.7.8: Build OK!
v5.4.51: Build OK!
v4.19.132: Build OK!
v4.14.188: Build OK!
v4.9.230: Build OK!
v4.4.230: Failed to apply! Possible dependencies:
44b53f67c99d0 ("arm64: Blacklist non-kprobe-able symbol")
NOTE: The patch will not be queued to stable trees until it is upstream.
How should we proceed with this patch?
--
Thanks
Sasha