* [PATCH v2 0/3] arm64: entry: Convert to generic entry
@ 2024-06-27 8:12 Jinjie Ruan
2024-06-27 8:12 ` [PATCH v2 1/3] entry: Add some arch funcs to support arm64 to use " Jinjie Ruan
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Jinjie Ruan @ 2024-06-27 8:12 UTC
To: catalin.marinas, will, oleg, tglx, peterz, luto, kees, wad,
ruanjinjie, rostedt, arnd, ardb, broonie, mark.rutland,
rick.p.edgecombe, leobras, linux-kernel, linux-arm-kernel
Currently, x86, RISC-V and LoongArch use the generic entry code. Convert
arm64 to use the generic entry infrastructure from kernel/entry/*. The
generic entry code makes maintainers' work easier and the code more
elegant, and it also removes a lot of duplicate code.
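At its core, the conversion makes every exception handler follow the
generic enter/exit pairing below (a simplified sketch of the pattern
used throughout patch 3; do_handler() stands in for the specific
handler):

	irqentry_state_t state = irqentry_enter(regs);

	/* handle the exception with RCU/lockdep/context tracking set up */
	do_handler(regs);

	irqentry_exit(regs, state);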
Changes in v2:
- Add Tested-by.
- Fix a bug where arch_post_report_syscall_entry() was not called in
syscall_trace_enter() if ptrace_report_syscall_entry() returns nonzero.
- Refactor report_syscall().
- Add comment for arch_prepare_report_syscall_exit().
- Adjust entry-common.h header file inclusion to alphabetical order.
- Update the commit message.
Jinjie Ruan (3):
entry: Add some arch funcs to support arm64 to use generic entry
arm64: Prepare to switch to generic entry
arm64: entry: Convert to generic entry
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/entry-common.h | 60 +++++
arch/arm64/include/asm/ptrace.h | 5 +
arch/arm64/include/asm/stacktrace.h | 5 +-
arch/arm64/include/asm/syscall.h | 6 +-
arch/arm64/include/asm/thread_info.h | 23 +-
arch/arm64/kernel/entry-common.c | 355 ++++++--------------------
arch/arm64/kernel/ptrace.c | 81 +++---
arch/arm64/kernel/signal.c | 3 +-
arch/arm64/kernel/syscall.c | 18 +-
include/linux/entry-common.h | 51 ++++
kernel/entry/common.c | 48 +++-
12 files changed, 294 insertions(+), 362 deletions(-)
create mode 100644 arch/arm64/include/asm/entry-common.h
--
2.34.1
* [PATCH v2 1/3] entry: Add some arch funcs to support arm64 to use generic entry
2024-06-27 8:12 [PATCH v2 0/3] arm64: entry: Convert to generic entry Jinjie Ruan
@ 2024-06-27 8:12 ` Jinjie Ruan
2024-06-27 8:12 ` [PATCH v2 2/3] arm64: Prepare to switch to " Jinjie Ruan
2024-06-27 8:12 ` [PATCH v2 3/3] arm64: entry: Convert " Jinjie Ruan
2 siblings, 0 replies; 9+ messages in thread
From: Jinjie Ruan @ 2024-06-27 8:12 UTC
To: catalin.marinas, will, oleg, tglx, peterz, luto, kees, wad,
ruanjinjie, rostedt, arnd, ardb, broonie, mark.rutland,
rick.p.edgecombe, leobras, linux-kernel, linux-arm-kernel
Add some arch functions to support arm64's use of the generic entry
code; they do not affect the existing architectures that already use
generic entry:
- arch_prepare/post_report_syscall_entry/exit().
- arch_enter_from_kernel_mode(), arch_exit_to_kernel_mode_prepare().
- arch_irqentry_exit_need_resched() to support architecture-related
need_resched() check logic.
Also make report_single_step() and syscall_exit_work() non-static so
that arm64 can use them later.
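For illustration, a minimal sketch of how an architecture can override
one of the new __weak defaults (a hypothetical example, not part of this
patch; arch_in_nmi_window() is an invented helper, and arm64's real
implementation arrives in a later patch):

	/*
	 * A strong definition overrides the __weak default in
	 * kernel/entry/common.c. Returning false suppresses preemption
	 * on IRQ exit.
	 */
	bool arch_irqentry_exit_need_resched(void)
	{
		if (arch_in_nmi_window())	/* hypothetical arch-specific check */
			return false;

		return true;
	}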
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
v2:
- Fix a bug where arch_post_report_syscall_entry() was not called in
syscall_trace_enter() if ptrace_report_syscall_entry() returns nonzero.
- Update the commit message.
---
include/linux/entry-common.h | 51 ++++++++++++++++++++++++++++++++++++
kernel/entry/common.c | 48 ++++++++++++++++++++++++++++-----
2 files changed, 93 insertions(+), 6 deletions(-)
diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index b0fb775a600d..1be4c3d91995 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -84,6 +84,18 @@ static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs);
static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs) {}
#endif
+static __always_inline void arch_enter_from_kernel_mode(struct pt_regs *regs);
+
+#ifndef arch_enter_from_kernel_mode
+static __always_inline void arch_enter_from_kernel_mode(struct pt_regs *regs) {}
+#endif
+
+static __always_inline void arch_exit_to_kernel_mode_prepare(struct pt_regs *regs);
+
+#ifndef arch_exit_to_kernel_mode_prepare
+static __always_inline void arch_exit_to_kernel_mode_prepare(struct pt_regs *regs) {}
+#endif
+
/**
* enter_from_user_mode - Establish state when coming from user mode
*
@@ -298,6 +310,42 @@ static __always_inline void arch_exit_to_user_mode(void) { }
*/
void arch_do_signal_or_restart(struct pt_regs *regs);
+/**
+ * arch_irqentry_exit_need_resched - Architecture specific need resched function
+ */
+bool arch_irqentry_exit_need_resched(void);
+
+/**
+ * arch_prepare_report_syscall_entry - Architecture specific report_syscall_entry()
+ * prepare function
+ */
+unsigned long arch_prepare_report_syscall_entry(struct pt_regs *regs);
+
+/**
+ * arch_post_report_syscall_entry - Architecture specific report_syscall_entry()
+ * post function
+ */
+void arch_post_report_syscall_entry(struct pt_regs *regs, unsigned long saved_reg);
+
+/**
+ * arch_prepare_report_syscall_exit - Architecture specific report_syscall_exit()
+ * prepare function
+ */
+unsigned long arch_prepare_report_syscall_exit(struct pt_regs *regs, unsigned long work);
+
+/**
+ * arch_post_report_syscall_exit - Architecture specific report_syscall_exit()
+ * post function
+ */
+void arch_post_report_syscall_exit(struct pt_regs *regs, unsigned long saved_reg,
+ unsigned long work);
+
+/**
+ * arch_forget_syscall - Architecture specific function called if
+ * ptrace_report_syscall_entry() return nonzero
+ */
+void arch_forget_syscall(struct pt_regs *regs);
+
/**
* exit_to_user_mode_loop - do any pending work before leaving to user space
*/
@@ -552,4 +600,7 @@ irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs);
*/
void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state);
+bool report_single_step(unsigned long work);
+void syscall_exit_work(struct pt_regs *regs, unsigned long work);
+
#endif
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 90843cc38588..625b63e947cb 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -25,9 +25,14 @@ static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
}
}
+unsigned long __weak arch_prepare_report_syscall_entry(struct pt_regs *regs) { return 0; }
+void __weak arch_post_report_syscall_entry(struct pt_regs *regs, unsigned long saved_reg) { }
+void __weak arch_forget_syscall(struct pt_regs *regs) { };
+
long syscall_trace_enter(struct pt_regs *regs, long syscall,
unsigned long work)
{
+ unsigned long saved_reg;
long ret = 0;
/*
@@ -42,8 +47,13 @@ long syscall_trace_enter(struct pt_regs *regs, long syscall,
/* Handle ptrace */
if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
+ saved_reg = arch_prepare_report_syscall_entry(regs);
ret = ptrace_report_syscall_entry(regs);
- if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
+ if (ret)
+ arch_forget_syscall(regs);
+
+ arch_post_report_syscall_entry(regs, saved_reg);
+ if (ret || work & SYSCALL_WORK_SYSCALL_EMU)
return -1L;
}
@@ -138,7 +148,7 @@ __always_inline unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
* SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall
* instruction has been already reported in syscall_enter_from_user_mode().
*/
-static inline bool report_single_step(unsigned long work)
+inline bool report_single_step(unsigned long work)
{
if (work & SYSCALL_WORK_SYSCALL_EMU)
return false;
@@ -146,8 +156,22 @@ static inline bool report_single_step(unsigned long work)
return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
}
-static void syscall_exit_work(struct pt_regs *regs, unsigned long work)
+unsigned long __weak arch_prepare_report_syscall_exit(struct pt_regs *regs,
+ unsigned long work)
+{
+ return 0;
+}
+
+void __weak arch_post_report_syscall_exit(struct pt_regs *regs,
+ unsigned long saved_reg,
+ unsigned long work)
+{
+
+}
+
+void syscall_exit_work(struct pt_regs *regs, unsigned long work)
{
+ unsigned long saved_reg;
bool step;
/*
@@ -169,8 +193,11 @@ static void syscall_exit_work(struct pt_regs *regs, unsigned long work)
trace_sys_exit(regs, syscall_get_return_value(current, regs));
step = report_single_step(work);
- if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
+ if (step || work & SYSCALL_WORK_SYSCALL_TRACE) {
+ saved_reg = arch_prepare_report_syscall_exit(regs, work);
ptrace_report_syscall_exit(regs, step);
+ arch_post_report_syscall_exit(regs, saved_reg, work);
+ }
}
/*
@@ -244,6 +271,8 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
return ret;
}
+ arch_enter_from_kernel_mode(regs);
+
/*
* If this entry hit the idle task invoke ct_irq_enter() whether
* RCU is watching or not.
@@ -300,6 +329,8 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
return ret;
}
+bool __weak arch_irqentry_exit_need_resched(void) { return true; }
+
void raw_irqentry_exit_cond_resched(void)
{
if (!preempt_count()) {
@@ -307,7 +338,7 @@ void raw_irqentry_exit_cond_resched(void)
rcu_irq_exit_check_preempt();
if (IS_ENABLED(CONFIG_DEBUG_ENTRY))
WARN_ON_ONCE(!on_thread_stack());
- if (need_resched())
+ if (need_resched() && arch_irqentry_exit_need_resched())
preempt_schedule_irq();
}
}
@@ -332,7 +363,12 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state)
/* Check whether this returns to user mode */
if (user_mode(regs)) {
irqentry_exit_to_user_mode(regs);
- } else if (!regs_irqs_disabled(regs)) {
+ return;
+ }
+
+ arch_exit_to_kernel_mode_prepare(regs);
+
+ if (!regs_irqs_disabled(regs)) {
/*
* If RCU was not watching on entry this needs to be done
* carefully and needs the same ordering of lockdep/tracing
--
2.34.1
* [PATCH v2 2/3] arm64: Prepare to switch to generic entry
2024-06-27 8:12 [PATCH v2 0/3] arm64: entry: Convert to generic entry Jinjie Ruan
2024-06-27 8:12 ` [PATCH v2 1/3] entry: Add some arch funcs to support arm64 to use " Jinjie Ruan
@ 2024-06-27 8:12 ` Jinjie Ruan
2024-06-27 8:12 ` [PATCH v2 3/3] arm64: entry: Convert " Jinjie Ruan
2 siblings, 0 replies; 9+ messages in thread
From: Jinjie Ruan @ 2024-06-27 8:12 UTC
To: catalin.marinas, will, oleg, tglx, peterz, luto, kees, wad,
ruanjinjie, rostedt, arnd, ardb, broonie, mark.rutland,
rick.p.edgecombe, leobras, linux-kernel, linux-arm-kernel
Prepare to switch to generic entry for arm64:
- Implement regs_irqs_disabled() using interrupts_enabled() macro.
- Make on_thread_stack() compatible with generic entry.
- Split report_syscall() into report_syscall_enter() and
report_syscall_exit() to prepare for the switch to generic entry.
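For reference, interrupts_enabled() on arm64 reduces to a check of the
I bit in the saved PSTATE (paraphrased from
arch/arm64/include/asm/ptrace.h), so the new helper is a thin wrapper
over that check:

	/* IRQs were enabled at the exception point iff the saved I bit is clear */
	#define interrupts_enabled(regs)	(!((regs)->pstate & PSR_I_BIT))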
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
v2:
- Refactor report_syscall().
- Update the commit message.
---
arch/arm64/include/asm/ptrace.h | 5 +++++
arch/arm64/include/asm/stacktrace.h | 5 ++++-
arch/arm64/kernel/ptrace.c | 29 ++++++++++++++++++++---------
3 files changed, 29 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 47ec58031f11..1857748ff017 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -360,6 +360,11 @@ static inline unsigned long regs_get_kernel_argument(struct pt_regs *regs,
return 0;
}
+static inline int regs_irqs_disabled(struct pt_regs *regs)
+{
+ return !interrupts_enabled(regs);
+}
+
/* We must avoid circular header include via sched.h */
struct task_struct;
int valid_user_regs(struct user_pt_regs *regs, struct task_struct *task);
diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 66ec8caa6ac0..36bc1831f906 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -57,7 +57,10 @@ static inline bool on_task_stack(const struct task_struct *tsk,
return stackinfo_on_stack(&info, sp, size);
}
-#define on_thread_stack() (on_task_stack(current, current_stack_pointer, 1))
+static __always_inline bool on_thread_stack(void)
+{
+ return on_task_stack(current, current_stack_pointer, 1);
+}
#ifdef CONFIG_VMAP_STACK
DECLARE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overflow_stack);
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 0d022599eb61..60fd85d5119d 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -2184,7 +2184,7 @@ enum ptrace_syscall_dir {
PTRACE_SYSCALL_EXIT,
};
-static void report_syscall(struct pt_regs *regs, enum ptrace_syscall_dir dir)
+static void report_syscall_enter(struct pt_regs *regs)
{
int regno;
unsigned long saved_reg;
@@ -2207,13 +2207,24 @@ static void report_syscall(struct pt_regs *regs, enum ptrace_syscall_dir dir)
*/
regno = (is_compat_task() ? 12 : 7);
saved_reg = regs->regs[regno];
- regs->regs[regno] = dir;
+ regs->regs[regno] = PTRACE_SYSCALL_ENTER;
- if (dir == PTRACE_SYSCALL_ENTER) {
- if (ptrace_report_syscall_entry(regs))
- forget_syscall(regs);
- regs->regs[regno] = saved_reg;
- } else if (!test_thread_flag(TIF_SINGLESTEP)) {
+ if (ptrace_report_syscall_entry(regs))
+ forget_syscall(regs);
+ regs->regs[regno] = saved_reg;
+}
+
+static void report_syscall_exit(struct pt_regs *regs)
+{
+ int regno;
+ unsigned long saved_reg;
+
+ /* See comment for report_syscall_enter() */
+ regno = (is_compat_task() ? 12 : 7);
+ saved_reg = regs->regs[regno];
+ regs->regs[regno] = PTRACE_SYSCALL_EXIT;
+
+ if (!test_thread_flag(TIF_SINGLESTEP)) {
ptrace_report_syscall_exit(regs, 0);
regs->regs[regno] = saved_reg;
} else {
@@ -2233,7 +2244,7 @@ int syscall_trace_enter(struct pt_regs *regs)
unsigned long flags = read_thread_flags();
if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
- report_syscall(regs, PTRACE_SYSCALL_ENTER);
+ report_syscall_enter(regs);
if (flags & _TIF_SYSCALL_EMU)
return NO_SYSCALL;
}
@@ -2261,7 +2272,7 @@ void syscall_trace_exit(struct pt_regs *regs)
trace_sys_exit(regs, syscall_get_return_value(current, regs));
if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
- report_syscall(regs, PTRACE_SYSCALL_EXIT);
+ report_syscall_exit(regs);
rseq_syscall(regs);
}
--
2.34.1
* [PATCH v2 3/3] arm64: entry: Convert to generic entry
2024-06-27 8:12 [PATCH v2 0/3] arm64: entry: Convert to generic entry Jinjie Ruan
2024-06-27 8:12 ` [PATCH v2 1/3] entry: Add some arch funcs to support arm64 to use " Jinjie Ruan
2024-06-27 8:12 ` [PATCH v2 2/3] arm64: Prepare to switch to " Jinjie Ruan
@ 2024-06-27 8:12 ` Jinjie Ruan
2024-06-27 17:01 ` Kees Cook
2 siblings, 1 reply; 9+ messages in thread
From: Jinjie Ruan @ 2024-06-27 8:12 UTC
To: catalin.marinas, will, oleg, tglx, peterz, luto, kees, wad,
ruanjinjie, rostedt, arnd, ardb, broonie, mark.rutland,
rick.p.edgecombe, leobras, linux-kernel, linux-arm-kernel
Currently, x86, RISC-V and LoongArch use the generic entry code. Convert
arm64 to use the generic entry infrastructure from kernel/entry/*. The
generic entry code makes maintainers' work easier and the code more
elegant, and it also removes more than 150 lines of duplicate code. The
changes are as follows:
- Remove the TIF_SYSCALL_* flags, _TIF_WORK_MASK and _TIF_SYSCALL_WORK.
- Remove syscall_trace_enter/exit() and use the generic versions.
- Remove *enter_from/exit_to_kernel_mode(), and wrap with generic
irqentry_enter/exit().
- Remove *enter_from/exit_to_user_mode(), and wrap with generic
irqentry_enter_from/exit_to_user_mode().
- Remove arm64_enter/exit_nmi() and use generic irqentry_nmi_enter/exit().
- Remove the PREEMPT_DYNAMIC code; the generic entry code handles this
once arch_irqentry_exit_need_resched() is implemented (see the sketch
below).
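The removal relies on the preemption logic already present in the
generic code, simplified here from kernel/entry/common.c as modified by
patch 1:

	void raw_irqentry_exit_cond_resched(void)
	{
		if (!preempt_count()) {
			/* the arch hook lets arm64 veto preemption, e.g. after an NMI */
			if (need_resched() && arch_irqentry_exit_need_resched())
				preempt_schedule_irq();
		}
	}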
Tested OK with the following test cases on QEMU Cortex-A53 and HiSilicon
Kunpeng-920:
- Run the `perf top` command
- Switch between the different `dynamic preempt` modes
- Use `pseudo nmi`
- Run the stress-ng CPU stress test.
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Tested-by: Jinjie Ruan <ruanjinjie@huawei.com>
Tested-by: Kees Cook <kees@kernel.org>
---
v2:
- Add Tested-by.
- Rebased on the refactored report_syscall() code.
- Add comment for arch_prepare_report_syscall_exit().
- Update the commit message.
- Adjust entry-common.h header file inclusion to alphabetical order.
---
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/entry-common.h | 60 +++++
arch/arm64/include/asm/syscall.h | 6 +-
arch/arm64/include/asm/thread_info.h | 23 +-
arch/arm64/kernel/entry-common.c | 355 ++++++--------------------
arch/arm64/kernel/ptrace.c | 72 ++----
arch/arm64/kernel/signal.c | 3 +-
arch/arm64/kernel/syscall.c | 18 +-
8 files changed, 182 insertions(+), 356 deletions(-)
create mode 100644 arch/arm64/include/asm/entry-common.h
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5d91259ee7b5..e6ccc5ea06fe 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -138,6 +138,7 @@ config ARM64
select GENERIC_CPU_DEVICES
select GENERIC_CPU_VULNERABILITIES
select GENERIC_EARLY_IOREMAP
+ select GENERIC_ENTRY
select GENERIC_IDLE_POLL_SETUP
select GENERIC_IOREMAP
select GENERIC_IRQ_IPI
diff --git a/arch/arm64/include/asm/entry-common.h b/arch/arm64/include/asm/entry-common.h
new file mode 100644
index 000000000000..a7eb4fdfc42b
--- /dev/null
+++ b/arch/arm64/include/asm/entry-common.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_ARM64_ENTRY_COMMON_H
+#define _ASM_ARM64_ENTRY_COMMON_H
+
+#include <linux/sched/signal.h>
+
+#include <asm/daifflags.h>
+#include <asm/fpsimd.h>
+#include <asm/mte.h>
+#include <asm/stacktrace.h>
+
+#define ARCH_EXIT_TO_USER_MODE_WORK (_TIF_MTE_ASYNC_FAULT | _TIF_FOREIGN_FPSTATE)
+
+static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs)
+{
+ mte_disable_tco_entry(current);
+}
+
+#define arch_enter_from_user_mode arch_enter_from_user_mode
+
+static inline void arch_exit_to_user_mode_work(struct pt_regs *regs,
+ unsigned long ti_work)
+{
+ if (ti_work & _TIF_MTE_ASYNC_FAULT) {
+ clear_thread_flag(TIF_MTE_ASYNC_FAULT);
+ send_sig_fault(SIGSEGV, SEGV_MTEAERR, (void __user *)NULL, current);
+ }
+
+ if (ti_work & _TIF_FOREIGN_FPSTATE)
+ fpsimd_restore_current_state();
+}
+
+#define arch_exit_to_user_mode_work arch_exit_to_user_mode_work
+
+static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
+ unsigned long ti_work)
+{
+ local_daif_mask();
+ mte_check_tfsr_exit();
+}
+
+#define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare
+
+static __always_inline void arch_enter_from_kernel_mode(struct pt_regs *regs)
+{
+ mte_check_tfsr_entry();
+ mte_disable_tco_entry(current);
+}
+
+#define arch_enter_from_kernel_mode arch_enter_from_kernel_mode
+
+static __always_inline void arch_exit_to_kernel_mode_prepare(struct pt_regs *regs)
+{
+ mte_check_tfsr_exit();
+}
+
+#define arch_exit_to_kernel_mode_prepare arch_exit_to_kernel_mode_prepare
+
+#endif /* _ASM_ARM64_ENTRY_COMMON_H */
diff --git a/arch/arm64/include/asm/syscall.h b/arch/arm64/include/asm/syscall.h
index ab8e14b96f68..9891b15da4c3 100644
--- a/arch/arm64/include/asm/syscall.h
+++ b/arch/arm64/include/asm/syscall.h
@@ -85,7 +85,9 @@ static inline int syscall_get_arch(struct task_struct *task)
return AUDIT_ARCH_AARCH64;
}
-int syscall_trace_enter(struct pt_regs *regs);
-void syscall_trace_exit(struct pt_regs *regs);
+static inline bool arch_syscall_is_vdso_sigreturn(struct pt_regs *regs)
+{
+ return false;
+}
#endif /* __ASM_SYSCALL_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index e72a3bf9e563..ec5d74c53bf9 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -43,6 +43,7 @@ struct thread_info {
void *scs_sp;
#endif
u32 cpu;
+ unsigned long syscall_work; /* SYSCALL_WORK_ flags */
};
#define thread_saved_pc(tsk) \
@@ -64,11 +65,6 @@ void arch_setup_new_exec(void);
#define TIF_UPROBE 4 /* uprobe breakpoint or singlestep */
#define TIF_MTE_ASYNC_FAULT 5 /* MTE Asynchronous Tag Check Fault */
#define TIF_NOTIFY_SIGNAL 6 /* signal notifications exist */
-#define TIF_SYSCALL_TRACE 8 /* syscall trace active */
-#define TIF_SYSCALL_AUDIT 9 /* syscall auditing */
-#define TIF_SYSCALL_TRACEPOINT 10 /* syscall tracepoint for ftrace */
-#define TIF_SECCOMP 11 /* syscall secure computing */
-#define TIF_SYSCALL_EMU 12 /* syscall emulation active */
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
#define TIF_FREEZE 19
#define TIF_RESTORE_SIGMASK 20
@@ -86,27 +82,12 @@ void arch_setup_new_exec(void);
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_FOREIGN_FPSTATE (1 << TIF_FOREIGN_FPSTATE)
-#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
-#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
-#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
-#define _TIF_SECCOMP (1 << TIF_SECCOMP)
-#define _TIF_SYSCALL_EMU (1 << TIF_SYSCALL_EMU)
-#define _TIF_UPROBE (1 << TIF_UPROBE)
-#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
+#define _TIF_UPROBE (1 << TIF_UPROBE)
#define _TIF_32BIT (1 << TIF_32BIT)
#define _TIF_SVE (1 << TIF_SVE)
#define _TIF_MTE_ASYNC_FAULT (1 << TIF_MTE_ASYNC_FAULT)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
-#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
- _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
- _TIF_UPROBE | _TIF_MTE_ASYNC_FAULT | \
- _TIF_NOTIFY_SIGNAL)
-
-#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
- _TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
- _TIF_SYSCALL_EMU)
-
#ifdef CONFIG_SHADOW_CALL_STACK
#define INIT_SCS \
.scs_base = init_shadow_call_stack, \
diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index b77a15955f28..784ca7ec47d6 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -14,6 +14,7 @@
#include <linux/sched.h>
#include <linux/sched/debug.h>
#include <linux/thread_info.h>
+#include <linux/entry-common.h>
#include <asm/cpufeature.h>
#include <asm/daifflags.h>
@@ -28,201 +29,15 @@
#include <asm/sysreg.h>
#include <asm/system_misc.h>
-/*
- * Handle IRQ/context state management when entering from kernel mode.
- * Before this function is called it is not safe to call regular kernel code,
- * instrumentable code, or any code which may trigger an exception.
- *
- * This is intended to match the logic in irqentry_enter(), handling the kernel
- * mode transitions only.
- */
-static __always_inline void __enter_from_kernel_mode(struct pt_regs *regs)
-{
- regs->exit_rcu = false;
-
- if (!IS_ENABLED(CONFIG_TINY_RCU) && is_idle_task(current)) {
- lockdep_hardirqs_off(CALLER_ADDR0);
- ct_irq_enter();
- trace_hardirqs_off_finish();
-
- regs->exit_rcu = true;
- return;
- }
-
- lockdep_hardirqs_off(CALLER_ADDR0);
- rcu_irq_enter_check_tick();
- trace_hardirqs_off_finish();
-}
-
-static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
+static __always_inline void exit_to_user_mode_wrapper(struct pt_regs *regs)
{
- __enter_from_kernel_mode(regs);
- mte_check_tfsr_entry();
- mte_disable_tco_entry(current);
-}
-
-/*
- * Handle IRQ/context state management when exiting to kernel mode.
- * After this function returns it is not safe to call regular kernel code,
- * instrumentable code, or any code which may trigger an exception.
- *
- * This is intended to match the logic in irqentry_exit(), handling the kernel
- * mode transitions only, and with preemption handled elsewhere.
- */
-static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs)
-{
- lockdep_assert_irqs_disabled();
-
- if (interrupts_enabled(regs)) {
- if (regs->exit_rcu) {
- trace_hardirqs_on_prepare();
- lockdep_hardirqs_on_prepare();
- ct_irq_exit();
- lockdep_hardirqs_on(CALLER_ADDR0);
- return;
- }
-
- trace_hardirqs_on();
- } else {
- if (regs->exit_rcu)
- ct_irq_exit();
- }
-}
-
-static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
-{
- mte_check_tfsr_exit();
- __exit_to_kernel_mode(regs);
-}
-
-/*
- * Handle IRQ/context state management when entering from user mode.
- * Before this function is called it is not safe to call regular kernel code,
- * instrumentable code, or any code which may trigger an exception.
- */
-static __always_inline void __enter_from_user_mode(void)
-{
- lockdep_hardirqs_off(CALLER_ADDR0);
- CT_WARN_ON(ct_state() != CONTEXT_USER);
- user_exit_irqoff();
- trace_hardirqs_off_finish();
- mte_disable_tco_entry(current);
-}
-
-static __always_inline void enter_from_user_mode(struct pt_regs *regs)
-{
- __enter_from_user_mode();
-}
-
-/*
- * Handle IRQ/context state management when exiting to user mode.
- * After this function returns it is not safe to call regular kernel code,
- * instrumentable code, or any code which may trigger an exception.
- */
-static __always_inline void __exit_to_user_mode(void)
-{
- trace_hardirqs_on_prepare();
- lockdep_hardirqs_on_prepare();
- user_enter_irqoff();
- lockdep_hardirqs_on(CALLER_ADDR0);
-}
-
-static void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags)
-{
- do {
- local_irq_enable();
-
- if (thread_flags & _TIF_NEED_RESCHED)
- schedule();
-
- if (thread_flags & _TIF_UPROBE)
- uprobe_notify_resume(regs);
-
- if (thread_flags & _TIF_MTE_ASYNC_FAULT) {
- clear_thread_flag(TIF_MTE_ASYNC_FAULT);
- send_sig_fault(SIGSEGV, SEGV_MTEAERR,
- (void __user *)NULL, current);
- }
-
- if (thread_flags & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
- do_signal(regs);
-
- if (thread_flags & _TIF_NOTIFY_RESUME)
- resume_user_mode_work(regs);
-
- if (thread_flags & _TIF_FOREIGN_FPSTATE)
- fpsimd_restore_current_state();
-
- local_irq_disable();
- thread_flags = read_thread_flags();
- } while (thread_flags & _TIF_WORK_MASK);
-}
-
-static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
-{
- unsigned long flags;
-
local_irq_disable();
-
- flags = read_thread_flags();
- if (unlikely(flags & _TIF_WORK_MASK))
- do_notify_resume(regs, flags);
-
- local_daif_mask();
-
- lockdep_sys_exit();
-}
-
-static __always_inline void exit_to_user_mode(struct pt_regs *regs)
-{
- exit_to_user_mode_prepare(regs);
- mte_check_tfsr_exit();
- __exit_to_user_mode();
+ irqentry_exit_to_user_mode(regs);
}
asmlinkage void noinstr asm_exit_to_user_mode(struct pt_regs *regs)
{
- exit_to_user_mode(regs);
-}
-
-/*
- * Handle IRQ/context state management when entering an NMI from user/kernel
- * mode. Before this function is called it is not safe to call regular kernel
- * code, instrumentable code, or any code which may trigger an exception.
- */
-static void noinstr arm64_enter_nmi(struct pt_regs *regs)
-{
- regs->lockdep_hardirqs = lockdep_hardirqs_enabled();
-
- __nmi_enter();
- lockdep_hardirqs_off(CALLER_ADDR0);
- lockdep_hardirq_enter();
- ct_nmi_enter();
-
- trace_hardirqs_off_finish();
- ftrace_nmi_enter();
-}
-
-/*
- * Handle IRQ/context state management when exiting an NMI from user/kernel
- * mode. After this function returns it is not safe to call regular kernel
- * code, instrumentable code, or any code which may trigger an exception.
- */
-static void noinstr arm64_exit_nmi(struct pt_regs *regs)
-{
- bool restore = regs->lockdep_hardirqs;
-
- ftrace_nmi_exit();
- if (restore) {
- trace_hardirqs_on_prepare();
- lockdep_hardirqs_on_prepare();
- }
-
- ct_nmi_exit();
- lockdep_hardirq_exit();
- if (restore)
- lockdep_hardirqs_on(CALLER_ADDR0);
- __nmi_exit();
+ exit_to_user_mode_wrapper(regs);
}
/*
@@ -259,27 +74,8 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs)
lockdep_hardirqs_on(CALLER_ADDR0);
}
-#ifdef CONFIG_PREEMPT_DYNAMIC
-DEFINE_STATIC_KEY_TRUE(sk_dynamic_irqentry_exit_cond_resched);
-#define need_irq_preemption() \
- (static_branch_unlikely(&sk_dynamic_irqentry_exit_cond_resched))
-#else
-#define need_irq_preemption() (IS_ENABLED(CONFIG_PREEMPTION))
-#endif
-
-static void __sched arm64_preempt_schedule_irq(void)
+bool arch_irqentry_exit_need_resched(void)
{
- if (!need_irq_preemption())
- return;
-
- /*
- * Note: thread_info::preempt_count includes both thread_info::count
- * and thread_info::need_resched, and is not equivalent to
- * preempt_count().
- */
- if (READ_ONCE(current_thread_info()->preempt_count) != 0)
- return;
-
/*
* DAIF.DA are cleared at the start of IRQ/FIQ handling, and when GIC
* priority masking is used the GIC irqchip driver will clear DAIF.IF
@@ -287,7 +83,7 @@ static void __sched arm64_preempt_schedule_irq(void)
* DAIF we must have handled an NMI, so skip preemption.
*/
if (system_uses_irq_prio_masking() && read_sysreg(daif))
- return;
+ return false;
/*
* Preempting a task from an IRQ means we leave copies of PSTATE
@@ -297,8 +93,10 @@ static void __sched arm64_preempt_schedule_irq(void)
* Only allow a task to be preempted once cpufeatures have been
* enabled.
*/
- if (system_capabilities_finalized())
- preempt_schedule_irq();
+ if (!system_capabilities_finalized())
+ return false;
+
+ return true;
}
static void do_interrupt_handler(struct pt_regs *regs,
@@ -320,7 +118,7 @@ extern void (*handle_arch_fiq)(struct pt_regs *);
static void noinstr __panic_unhandled(struct pt_regs *regs, const char *vector,
unsigned long esr)
{
- arm64_enter_nmi(regs);
+ irqentry_nmi_enter(regs);
console_verbose();
@@ -426,41 +224,43 @@ UNHANDLED(el1t, 64, error)
static void noinstr el1_abort(struct pt_regs *regs, unsigned long esr)
{
unsigned long far = read_sysreg(far_el1);
+ irqentry_state_t state = irqentry_enter(regs);
- enter_from_kernel_mode(regs);
local_daif_inherit(regs);
do_mem_abort(far, esr, regs);
local_daif_mask();
- exit_to_kernel_mode(regs);
+ irqentry_exit(regs, state);
}
static void noinstr el1_pc(struct pt_regs *regs, unsigned long esr)
{
unsigned long far = read_sysreg(far_el1);
+ irqentry_state_t state = irqentry_enter(regs);
- enter_from_kernel_mode(regs);
local_daif_inherit(regs);
do_sp_pc_abort(far, esr, regs);
local_daif_mask();
- exit_to_kernel_mode(regs);
+ irqentry_exit(regs, state);
}
static void noinstr el1_undef(struct pt_regs *regs, unsigned long esr)
{
- enter_from_kernel_mode(regs);
+ irqentry_state_t state = irqentry_enter(regs);
+
local_daif_inherit(regs);
do_el1_undef(regs, esr);
local_daif_mask();
- exit_to_kernel_mode(regs);
+ irqentry_exit(regs, state);
}
static void noinstr el1_bti(struct pt_regs *regs, unsigned long esr)
{
- enter_from_kernel_mode(regs);
+ irqentry_state_t state = irqentry_enter(regs);
+
local_daif_inherit(regs);
do_el1_bti(regs, esr);
local_daif_mask();
- exit_to_kernel_mode(regs);
+ irqentry_exit(regs, state);
}
static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
@@ -475,11 +275,12 @@ static void noinstr el1_dbg(struct pt_regs *regs, unsigned long esr)
static void noinstr el1_fpac(struct pt_regs *regs, unsigned long esr)
{
- enter_from_kernel_mode(regs);
+ irqentry_state_t state = irqentry_enter(regs);
+
local_daif_inherit(regs);
do_el1_fpac(regs, esr);
local_daif_mask();
- exit_to_kernel_mode(regs);
+ irqentry_exit(regs, state);
}
asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
@@ -522,23 +323,22 @@ asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
static __always_inline void __el1_pnmi(struct pt_regs *regs,
void (*handler)(struct pt_regs *))
{
- arm64_enter_nmi(regs);
+ irqentry_state_t state = irqentry_nmi_enter(regs);
+
do_interrupt_handler(regs, handler);
- arm64_exit_nmi(regs);
+ irqentry_nmi_exit(regs, state);
}
static __always_inline void __el1_irq(struct pt_regs *regs,
void (*handler)(struct pt_regs *))
{
- enter_from_kernel_mode(regs);
+ irqentry_state_t state = irqentry_enter(regs);
irq_enter_rcu();
do_interrupt_handler(regs, handler);
irq_exit_rcu();
- arm64_preempt_schedule_irq();
-
- exit_to_kernel_mode(regs);
+ irqentry_exit(regs, state);
}
static void noinstr el1_interrupt(struct pt_regs *regs,
void (*handler)(struct pt_regs *))
@@ -564,21 +364,22 @@ asmlinkage void noinstr el1h_64_fiq_handler(struct pt_regs *regs)
asmlinkage void noinstr el1h_64_error_handler(struct pt_regs *regs)
{
unsigned long esr = read_sysreg(esr_el1);
+ irqentry_state_t state;
local_daif_restore(DAIF_ERRCTX);
- arm64_enter_nmi(regs);
+ state = irqentry_nmi_enter(regs);
do_serror(regs, esr);
- arm64_exit_nmi(regs);
+ irqentry_nmi_exit(regs, state);
}
static void noinstr el0_da(struct pt_regs *regs, unsigned long esr)
{
unsigned long far = read_sysreg(far_el1);
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_mem_abort(far, esr, regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
@@ -593,50 +394,50 @@ static void noinstr el0_ia(struct pt_regs *regs, unsigned long esr)
if (!is_ttbr0_addr(far))
arm64_apply_bp_hardening();
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_mem_abort(far, esr, regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_fpsimd_acc(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_fpsimd_acc(esr, regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_sve_acc(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_sve_acc(esr, regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_sme_acc(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_sme_acc(esr, regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_fpsimd_exc(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_fpsimd_exc(esr, regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_sys(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_el0_sys(esr, regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
@@ -646,50 +447,50 @@ static void noinstr el0_pc(struct pt_regs *regs, unsigned long esr)
if (!is_ttbr0_addr(instruction_pointer(regs)))
arm64_apply_bp_hardening();
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_sp_pc_abort(far, esr, regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_sp(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_sp_pc_abort(regs->sp, esr, regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_undef(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_el0_undef(regs, esr);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_bti(struct pt_regs *regs)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_el0_bti(regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_mops(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_el0_mops(regs, esr);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_inv(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
bad_el0_sync(regs, 0, esr);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
@@ -697,28 +498,28 @@ static void noinstr el0_dbg(struct pt_regs *regs, unsigned long esr)
/* Only watchpoints write FAR_EL1, otherwise its UNKNOWN */
unsigned long far = read_sysreg(far_el1);
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
do_debug_exception(far, esr, regs);
local_daif_restore(DAIF_PROCCTX);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_svc(struct pt_regs *regs)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
cortex_a76_erratum_1463225_svc_handler();
fp_user_discard();
local_daif_restore(DAIF_PROCCTX);
do_el0_svc(regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_fpac(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_el0_fpac(regs, esr);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
@@ -783,7 +584,7 @@ asmlinkage void noinstr el0t_64_sync_handler(struct pt_regs *regs)
static void noinstr el0_interrupt(struct pt_regs *regs,
void (*handler)(struct pt_regs *))
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
@@ -794,7 +595,7 @@ static void noinstr el0_interrupt(struct pt_regs *regs,
do_interrupt_handler(regs, handler);
irq_exit_rcu();
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr __el0_irq_handler_common(struct pt_regs *regs)
@@ -820,14 +621,15 @@ asmlinkage void noinstr el0t_64_fiq_handler(struct pt_regs *regs)
static void noinstr __el0_error_handler_common(struct pt_regs *regs)
{
unsigned long esr = read_sysreg(esr_el1);
+ irqentry_state_t state_nmi;
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_ERRCTX);
- arm64_enter_nmi(regs);
+ state_nmi = irqentry_nmi_enter(regs);
do_serror(regs, esr);
- arm64_exit_nmi(regs);
+ irqentry_nmi_exit(regs, state_nmi);
local_daif_restore(DAIF_PROCCTX);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
@@ -838,19 +640,19 @@ asmlinkage void noinstr el0t_64_error_handler(struct pt_regs *regs)
#ifdef CONFIG_COMPAT
static void noinstr el0_cp15(struct pt_regs *regs, unsigned long esr)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
local_daif_restore(DAIF_PROCCTX);
do_el0_cp15(esr, regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
static void noinstr el0_svc_compat(struct pt_regs *regs)
{
- enter_from_user_mode(regs);
+ irqentry_enter_from_user_mode(regs);
cortex_a76_erratum_1463225_svc_handler();
local_daif_restore(DAIF_PROCCTX);
do_el0_svc_compat(regs);
- exit_to_user_mode(regs);
+ exit_to_user_mode_wrapper(regs);
}
asmlinkage void noinstr el0t_32_sync_handler(struct pt_regs *regs)
@@ -924,7 +726,7 @@ asmlinkage void noinstr __noreturn handle_bad_stack(struct pt_regs *regs)
unsigned long esr = read_sysreg(esr_el1);
unsigned long far = read_sysreg(far_el1);
- arm64_enter_nmi(regs);
+ irqentry_nmi_enter(regs);
panic_bad_stack(regs, esr, far);
}
#endif /* CONFIG_VMAP_STACK */
@@ -933,6 +735,7 @@ asmlinkage void noinstr __noreturn handle_bad_stack(struct pt_regs *regs)
asmlinkage noinstr unsigned long
__sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
{
+ irqentry_state_t state;
unsigned long ret;
/*
@@ -957,9 +760,9 @@ __sdei_handler(struct pt_regs *regs, struct sdei_registered_event *arg)
else if (cpu_has_pan())
set_pstate_pan(0);
- arm64_enter_nmi(regs);
+ state = irqentry_nmi_enter(regs);
ret = do_sdei_event(regs, arg);
- arm64_exit_nmi(regs);
+ irqentry_nmi_exit(regs, state);
return ret;
}
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 60fd85d5119d..449c11af25ec 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -29,6 +29,7 @@
#include <linux/regset.h>
#include <linux/elf.h>
#include <linux/rseq.h>
+#include <linux/entry-common.h>
#include <asm/compat.h>
#include <asm/cpufeature.h>
@@ -41,9 +42,6 @@
#include <asm/traps.h>
#include <asm/system_misc.h>
-#define CREATE_TRACE_POINTS
-#include <trace/events/syscalls.h>
-
struct pt_regs_offset {
const char *name;
int offset;
@@ -2184,10 +2182,10 @@ enum ptrace_syscall_dir {
PTRACE_SYSCALL_EXIT,
};
-static void report_syscall_enter(struct pt_regs *regs)
+unsigned long arch_prepare_report_syscall_entry(struct pt_regs *regs)
{
- int regno;
unsigned long saved_reg;
+ int regno;
/*
* We have some ABI weirdness here in the way that we handle syscall
@@ -2209,72 +2207,50 @@ static void report_syscall_enter(struct pt_regs *regs)
saved_reg = regs->regs[regno];
regs->regs[regno] = PTRACE_SYSCALL_ENTER;
- if (ptrace_report_syscall_entry(regs))
- forget_syscall(regs);
+ return saved_reg;
+}
+
+void arch_post_report_syscall_entry(struct pt_regs *regs, unsigned long saved_reg)
+{
+ int regno = (is_compat_task() ? 12 : 7);
+
regs->regs[regno] = saved_reg;
}
-static void report_syscall_exit(struct pt_regs *regs)
+unsigned long arch_prepare_report_syscall_exit(struct pt_regs *regs, unsigned long work)
{
- int regno;
unsigned long saved_reg;
+ int regno;
- /* See comment for report_syscall_enter() */
+ /* See comment for arch_prepare_report_syscall_entry() */
regno = (is_compat_task() ? 12 : 7);
saved_reg = regs->regs[regno];
regs->regs[regno] = PTRACE_SYSCALL_EXIT;
- if (!test_thread_flag(TIF_SINGLESTEP)) {
- ptrace_report_syscall_exit(regs, 0);
- regs->regs[regno] = saved_reg;
- } else {
- regs->regs[regno] = saved_reg;
-
+ if (report_single_step(work)) {
/*
* Signal a pseudo-step exception since we are stepping but
* tracer modifications to the registers may have rewound the
* state machine.
*/
- ptrace_report_syscall_exit(regs, 1);
+ regs->regs[regno] = saved_reg;
}
+
+ return saved_reg;
}
-int syscall_trace_enter(struct pt_regs *regs)
+void arch_post_report_syscall_exit(struct pt_regs *regs, unsigned long saved_reg,
+ unsigned long work)
{
- unsigned long flags = read_thread_flags();
-
- if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
- report_syscall_enter(regs);
- if (flags & _TIF_SYSCALL_EMU)
- return NO_SYSCALL;
- }
-
- /* Do the secure computing after ptrace; failures should be fast. */
- if (secure_computing() == -1)
- return NO_SYSCALL;
-
- if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
- trace_sys_enter(regs, regs->syscallno);
+ int regno = (is_compat_task() ? 12 : 7);
- audit_syscall_entry(regs->syscallno, regs->orig_x0, regs->regs[1],
- regs->regs[2], regs->regs[3]);
-
- return regs->syscallno;
+ if (!report_single_step(work))
+ regs->regs[regno] = saved_reg;
}
-void syscall_trace_exit(struct pt_regs *regs)
+void arch_forget_syscall(struct pt_regs *regs)
{
- unsigned long flags = read_thread_flags();
-
- audit_syscall_exit(regs);
-
- if (flags & _TIF_SYSCALL_TRACEPOINT)
- trace_sys_exit(regs, syscall_get_return_value(current, regs));
-
- if (flags & (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP))
- report_syscall_exit(regs);
-
- rseq_syscall(regs);
+ forget_syscall(regs);
}
/*
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 4a77f4976e11..2982f6db6d96 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -19,6 +19,7 @@
#include <linux/ratelimit.h>
#include <linux/rseq.h>
#include <linux/syscalls.h>
+#include <linux/entry-common.h>
#include <asm/daifflags.h>
#include <asm/debug-monitors.h>
@@ -1266,7 +1267,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
* the kernel can handle, and then we build all the user-level signal handling
* stack-frames in one go after that.
*/
-void do_signal(struct pt_regs *regs)
+void arch_do_signal_or_restart(struct pt_regs *regs)
{
unsigned long continue_addr = 0, restart_addr = 0;
int retval = 0;
diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
index ad198262b981..160ac9d15c27 100644
--- a/arch/arm64/kernel/syscall.c
+++ b/arch/arm64/kernel/syscall.c
@@ -7,6 +7,7 @@
#include <linux/ptrace.h>
#include <linux/randomize_kstack.h>
#include <linux/syscalls.h>
+#include <linux/entry-common.h>
#include <asm/debug-monitors.h>
#include <asm/exception.h>
@@ -66,14 +67,15 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
choose_random_kstack_offset(get_random_u16() & 0x1FF);
}
-static inline bool has_syscall_work(unsigned long flags)
+static inline bool has_syscall_work(unsigned long work)
{
- return unlikely(flags & _TIF_SYSCALL_WORK);
+ return unlikely(work & SYSCALL_WORK_ENTER);
}
static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
const syscall_fn_t syscall_table[])
{
+ unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
unsigned long flags = read_thread_flags();
regs->orig_x0 = regs->regs[0];
@@ -107,7 +109,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
return;
}
- if (has_syscall_work(flags)) {
+ if (has_syscall_work(work)) {
/*
* The de-facto standard way to skip a system call using ptrace
* is to set the system call to -1 (NO_SYSCALL) and set x0 to a
@@ -125,7 +127,7 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
*/
if (scno == NO_SYSCALL)
syscall_set_return_value(current, regs, -ENOSYS, 0);
- scno = syscall_trace_enter(regs);
+ scno = syscall_trace_enter(regs, regs->syscallno, work);
if (scno == NO_SYSCALL)
goto trace_exit;
}
@@ -137,14 +139,14 @@ static void el0_svc_common(struct pt_regs *regs, int scno, int sc_nr,
* check again. However, if we were tracing entry, then we always trace
* exit regardless, as the old entry assembly did.
*/
- if (!has_syscall_work(flags) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
- flags = read_thread_flags();
- if (!has_syscall_work(flags) && !(flags & _TIF_SINGLESTEP))
+ if (!has_syscall_work(work) && !IS_ENABLED(CONFIG_DEBUG_RSEQ)) {
+ work = READ_ONCE(current_thread_info()->syscall_work);
+ if (!has_syscall_work(work) && !report_single_step(work))
return;
}
trace_exit:
- syscall_trace_exit(regs);
+ syscall_exit_work(regs, work);
}
void do_el0_svc(struct pt_regs *regs)
--
2.34.1
* Re: [PATCH v2 3/3] arm64: entry: Convert to generic entry
2024-06-27 8:12 ` [PATCH v2 3/3] arm64: entry: Convert " Jinjie Ruan
@ 2024-06-27 17:01 ` Kees Cook
2024-06-27 17:15 ` Mark Brown
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Kees Cook @ 2024-06-27 17:01 UTC
To: Jinjie Ruan
Cc: catalin.marinas, will, oleg, tglx, peterz, luto, wad, rostedt,
arnd, ardb, broonie, mark.rutland, rick.p.edgecombe, leobras,
linux-kernel, linux-arm-kernel
On Thu, Jun 27, 2024 at 04:12:09PM +0800, Jinjie Ruan wrote:
> Tested ok with following test cases on Qemu cortex-a53 and HiSilicon
> Kunpeng-920:
> - Run `perf top` command
> - Switch between different `dynamic preempt` mode
> - Use `pseudo nmi`
> - stress-ng CPU stress test.
I think two other things to test would be the MTE functionality
(especially async mode), and kasan in general.
I've really struggled to get MTE working with qemu, so likely real
hardware would be needed for that... I'm hoping the ARM folks have
access to something that would work well for this. :)
-Kees
--
Kees Cook
* Re: [PATCH v2 3/3] arm64: entry: Convert to generic entry
2024-06-27 17:01 ` Kees Cook
@ 2024-06-27 17:15 ` Mark Brown
2024-06-27 18:24 ` Kees Cook
2024-06-28 3:20 ` Jinjie Ruan
2024-06-28 7:05 ` Jinjie Ruan
2 siblings, 1 reply; 9+ messages in thread
From: Mark Brown @ 2024-06-27 17:15 UTC
To: Kees Cook
Cc: Jinjie Ruan, catalin.marinas, will, oleg, tglx, peterz, luto, wad,
rostedt, arnd, ardb, mark.rutland, rick.p.edgecombe, leobras,
linux-kernel, linux-arm-kernel
On Thu, Jun 27, 2024 at 10:01:11AM -0700, Kees Cook wrote:
> I've really struggled to get MTE working with qemu, so likely real
> hardware would be needed for that... I'm hoping the ARM folks have
> access to something that would work well for this. :)
What issues have you been running into? You could also try the fast
model - https://shrinkwrap.docs.arm.com/en/latest/ packages up the
firmware building and execution stuff to make it more approachable.
Note however that fast is a relative term.
* Re: [PATCH v2 3/3] arm64: entry: Convert to generic entry
2024-06-27 17:15 ` Mark Brown
@ 2024-06-27 18:24 ` Kees Cook
0 siblings, 0 replies; 9+ messages in thread
From: Kees Cook @ 2024-06-27 18:24 UTC
To: Mark Brown
Cc: Jinjie Ruan, catalin.marinas, will, oleg, tglx, peterz, luto, wad,
rostedt, arnd, ardb, mark.rutland, rick.p.edgecombe, leobras,
linux-kernel, linux-arm-kernel
On Thu, Jun 27, 2024 at 06:15:36PM +0100, Mark Brown wrote:
> On Thu, Jun 27, 2024 at 10:01:11AM -0700, Kees Cook wrote:
>
> > I've really struggled to get MTE working with qemu, so likely real
> > hardware would be needed for that... I'm hoping the ARM folks have
> > access to something that would work well for this. :)
>
> What issues have you been running into?
It was so slow to emulate that I couldn't finish booting.
However, looking at my qemu scripts, it seems I may have solved this at
some point. I remembered wrong! I can test MTE. :P
This is what I'm using currently:
-cpu max,pauth-impdef=on \
-machine virtualization=true \
-machine virt,gic-version=max,mte=on \
It seems PAC was the issue. Using "pauth-impdef=on" solved it. "The
architected QARMA5 and QARMA3 algorithms have good cryptographic
properties, but can be quite slow to emulate."
https://qemu-project.gitlab.io/qemu/system/arm/cpu-features.html
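(For completeness, a full invocation built around those flags might
look like the following; the kernel image, rootfs and memory/CPU sizing
here are placeholders:)

	qemu-system-aarch64 \
		-cpu max,pauth-impdef=on \
		-machine virt,gic-version=max,mte=on \
		-machine virtualization=true \
		-m 2G -smp 2 -nographic \
		-kernel Image \
		-append "console=ttyAMA0 root=/dev/vda" \
		-drive file=rootfs.img,format=raw,if=virtio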
I will go test MTE with this series...
--
Kees Cook
* Re: [PATCH v2 3/3] arm64: entry: Convert to generic entry
2024-06-27 17:01 ` Kees Cook
2024-06-27 17:15 ` Mark Brown
@ 2024-06-28 3:20 ` Jinjie Ruan
2024-06-28 7:05 ` Jinjie Ruan
2 siblings, 0 replies; 9+ messages in thread
From: Jinjie Ruan @ 2024-06-28 3:20 UTC
To: Kees Cook
Cc: catalin.marinas, will, oleg, tglx, peterz, luto, wad, rostedt,
arnd, ardb, broonie, mark.rutland, rick.p.edgecombe, leobras,
linux-kernel, linux-arm-kernel
On 2024/6/28 1:01, Kees Cook wrote:
> On Thu, Jun 27, 2024 at 04:12:09PM +0800, Jinjie Ruan wrote:
>> Tested ok with following test cases on Qemu cortex-a53 and HiSilicon
>> Kunpeng-920:
>> - Run `perf top` command
>> - Switch between different `dynamic preempt` mode
>> - Use `pseudo nmi`
>> - stress-ng CPU stress test.
>
> I think two other things to test would be the MTE functionality
> (especially async mode), and kasan in general.
You are right, I'll test MTE and KASAN later, thank you!
>
> I've really struggled to get MTE working with qemu, so likely real
> hardware would be needed for that... I'm hoping the ARM folks have
> access to something that would work well for this. :)
>
> -Kees
>
* Re: [PATCH v2 3/3] arm64: entry: Convert to generic entry
2024-06-27 17:01 ` Kees Cook
2024-06-27 17:15 ` Mark Brown
2024-06-28 3:20 ` Jinjie Ruan
@ 2024-06-28 7:05 ` Jinjie Ruan
2 siblings, 0 replies; 9+ messages in thread
From: Jinjie Ruan @ 2024-06-28 7:05 UTC
To: Kees Cook
Cc: catalin.marinas, will, oleg, tglx, peterz, luto, wad, rostedt,
arnd, ardb, broonie, mark.rutland, rick.p.edgecombe, leobras,
linux-kernel, linux-arm-kernel
On 2024/6/28 1:01, Kees Cook wrote:
> On Thu, Jun 27, 2024 at 04:12:09PM +0800, Jinjie Ruan wrote:
>> Tested ok with following test cases on Qemu cortex-a53 and HiSilicon
>> Kunpeng-920:
>> - Run `perf top` command
>> - Switch between different `dynamic preempt` mode
>> - Use `pseudo nmi`
>> - stress-ng CPU stress test.
>
> I think two other things to test would be the MTE functionality
> (especially async mode), and kasan in general.
>
> I've really struggled to get MTE working with qemu, so likely real
> hardware would be needed for that... I'm hoping the ARM folks have
> access to something that would work well for this. :)
Hi, Kees
I ran the following test cases, which are mostly in
tools/testing/selftests/arm64/mte; the results are OK, as below:
1. The simple MTE test case in
Documentation/arch/arm64/memory-tagging-extension.rst passes:
# ./mte_test
a[0] = 1 a[1] = 2
0x200ffff9dfa3000
a[0] = 3 a[1] = 2
Expecting SIGSEGV...
Segmentation fault
2.
# cd tools/testing/selftests/arm64/mte/
# ./check_prctl passes:
TAP version 13
1..5
ok 1 check_basic_read
ok 2 NONE
ok 3 SYNC
ok 4 ASYNC
ok 5 SYNC+ASYNC
# Totals: pass:5 fail:0 xfail:0 xpass:0 skip:0 error:0
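(For reference, these modes are selected per thread via prctl(); a
minimal sketch based on the example in
Documentation/arch/arm64/memory-tagging-extension.rst, with error
handling and the mmap(PROT_MTE) side omitted:)

	#include <sys/prctl.h>
	#include <linux/prctl.h>

	static int enable_mte_async(void)
	{
		/*
		 * Enable tagged addresses with asynchronous MTE tag-check
		 * faults; allow all tag values except 0 to be generated.
		 */
		return prctl(PR_SET_TAGGED_ADDR_CTRL,
			     PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYNC |
			     (0xfffe << PR_MTE_TAG_SHIFT),
			     0, 0, 0);
	}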
3. ./check_tags_inclusion passes:
1..4
ok 1 Check an included tag value with sync mode
ok 2 Check different included tags value with sync mode
ok 3 Check none included tags value with sync mode
ok 4 Check all included tags value with sync mode
# Totals: pass:4 fail:0 xfail:0 xpass:0 skip:0 error:0
4. ./check_user_mem passes:
1..64
ok 1 test type: read, MTE_SYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 0
ok 2 test type: read, MTE_SYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 16
ok 3 test type: read, MTE_SYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 0
ok 4 test type: read, MTE_SYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 16
ok 5 test type: read, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 0
ok 6 test type: read, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 16
ok 7 test type: read, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 0
ok 8 test type: read, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 16
ok 9 test type: read, MTE_ASYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 0
ok 10 test type: read, MTE_ASYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 16
ok 11 test type: read, MTE_ASYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 0
ok 12 test type: read, MTE_ASYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 16
ok 13 test type: read, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 0
ok 14 test type: read, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 16
ok 15 test type: read, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 0
ok 16 test type: read, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 16
ok 17 test type: write, MTE_SYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 0
ok 18 test type: write, MTE_SYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 16
ok 19 test type: write, MTE_SYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 0
ok 20 test type: write, MTE_SYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 16
ok 21 test type: write, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 0
ok 22 test type: write, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 16
ok 23 test type: write, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 0
ok 24 test type: write, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 16
ok 25 test type: write, MTE_ASYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 0
ok 26 test type: write, MTE_ASYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 16
ok 27 test type: write, MTE_ASYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 0
ok 28 test type: write, MTE_ASYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 16
ok 29 test type: write, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 0
ok 30 test type: write, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 16
ok 31 test type: write, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 0
ok 32 test type: write, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 16
ok 33 test type: readv, MTE_SYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 0
ok 34 test type: readv, MTE_SYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 16
ok 35 test type: readv, MTE_SYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 0
ok 36 test type: readv, MTE_SYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 16
ok 37 test type: readv, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 0
ok 38 test type: readv, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 16
ok 39 test type: readv, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 0
ok 40 test type: readv, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 16
ok 41 test type: readv, MTE_ASYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 0
ok 42 test type: readv, MTE_ASYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 16
ok 43 test type: readv, MTE_ASYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 0
ok 44 test type: readv, MTE_ASYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 16
ok 45 test type: readv, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 0
ok 46 test type: readv, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 16
ok 47 test type: readv, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 0
ok 48 test type: readv, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 16
ok 49 test type: writev, MTE_SYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 0
ok 50 test type: writev, MTE_SYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 16
ok 51 test type: writev, MTE_SYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 0
ok 52 test type: writev, MTE_SYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 16
ok 53 test type: writev, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 0
ok 54 test type: writev, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 16
ok 55 test type: writev, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 0
ok 56 test type: writev, MTE_SYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 16
ok 57 test type: writev, MTE_ASYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 0
ok 58 test type: writev, MTE_ASYNC_ERR, MAP_SHARED, tag len: 0, tag offset: 16
ok 59 test type: writev, MTE_ASYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 0
ok 60 test type: writev, MTE_ASYNC_ERR, MAP_SHARED, tag len: 16, tag offset: 16
ok 61 test type: writev, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 0
ok 62 test type: writev, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 0, tag offset: 16
ok 63 test type: writev, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 0
ok 64 test type: writev, MTE_ASYNC_ERR, MAP_PRIVATE, tag len: 16, tag offset: 16
# Totals: pass:64 fail:0 xfail:0 xpass:0 skip:0 error:0
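A note on the two error modes exercised above: with MTE_SYNC_ERR a
mismatching access faults precisely and the kernel delivers SIGSEGV with
si_code SEGV_MTESERR, while with MTE_ASYNC_ERR the fault is accumulated by
the CPU and reported later as SEGV_MTEAERR without a faulting address. A
minimal, hypothetical sketch (not part of the selftest) of telling the two
apart; the si_code values are from include/uapi/asm-generic/siginfo.h:

#include <signal.h>
#include <unistd.h>

#ifndef SEGV_MTEAERR
#define SEGV_MTEAERR 8	/* asynchronous (imprecise) MTE fault */
#define SEGV_MTESERR 9	/* synchronous (precise) MTE fault */
#endif

static void segv_handler(int sig, siginfo_t *info, void *ucontext)
{
	/* write() is async-signal-safe, unlike printf() */
	if (info->si_code == SEGV_MTESERR)
		write(STDERR_FILENO, "sync tag-check fault\n", 21);
	else if (info->si_code == SEGV_MTEAERR)
		write(STDERR_FILENO, "async tag-check fault\n", 22);
	_exit(1);
}

int main(void)
{
	struct sigaction sa = {
		.sa_sigaction = segv_handler,
		.sa_flags = SA_SIGINFO,
	};

	sigaction(SIGSEGV, &sa, NULL);
	/* ... perform tagged accesses here ... */
	return 0;
}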
5. ./check_mmap_options pass
1..22
ok 1 Check anonymous memory with private mapping, sync error mode, mmap memory and tag check off
ok 2 Check file memory with private mapping, sync error mode, mmap/mprotect memory and tag check off
ok 3 Check anonymous memory with private mapping, no error mode, mmap memory and tag check off
ok 4 Check file memory with private mapping, no error mode, mmap/mprotect memory and tag check off
ok 5 Check anonymous memory with private mapping, sync error mode, mmap memory and tag check on
ok 6 Check anonymous memory with private mapping, sync error mode, mmap/mprotect memory and tag check on
ok 7 Check anonymous memory with shared mapping, sync error mode, mmap memory and tag check on
ok 8 Check anonymous memory with shared mapping, sync error mode, mmap/mprotect memory and tag check on
ok 9 Check anonymous memory with private mapping, async error mode, mmap memory and tag check on
ok 10 Check anonymous memory with private mapping, async error mode, mmap/mprotect memory and tag check on
ok 11 Check anonymous memory with shared mapping, async error mode, mmap memory and tag check on
ok 12 Check anonymous memory with shared mapping, async error mode, mmap/mprotect memory and tag check on
ok 13 Check file memory with private mapping, sync error mode, mmap memory and tag check on
ok 14 Check file memory with private mapping, sync error mode, mmap/mprotect memory and tag check on
ok 15 Check file memory with shared mapping, sync error mode, mmap memory and tag check on
ok 16 Check file memory with shared mapping, sync error mode, mmap/mprotect memory and tag check on
ok 17 Check file memory with private mapping, async error mode, mmap memory and tag check on
ok 18 Check file memory with private mapping, async error mode, mmap/mprotect memory and tag check on
ok 19 Check file memory with shared mapping, async error mode, mmap memory and tag check on
ok 20 Check file memory with shared mapping, async error mode, mmap/mprotect memory and tag check on
ok 21 Check clear PROT_MTE flags with private mapping, sync error mode and mmap memory
ok 22 Check clear PROT_MTE flags with private mapping and sync error mode and mmap/mprotect memory
# Totals: pass:22 fail:0 xfail:0 xpass:0 skip:0 error:0
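The "tag check on" cases above come down to mapping memory with PROT_MTE
and selecting a tag-check mode with prctl(). As a rough sketch under the
arm64 MTE ABI (a hypothetical example, not taken from the selftest; it
assumes an MTE-capable CPU and compiling with -march=armv8.5-a+memtag for
the stg instruction):

#include <stdio.h>
#include <sys/mman.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20	/* arm64-only mmap()/mprotect() flag */
#endif

int main(void)
{
	unsigned char *p, *q;

	/* Enable tagged addressing with synchronous tag-check faults. */
	if (prctl(PR_SET_TAGGED_ADDR_CTRL,
		  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC, 0, 0, 0)) {
		perror("prctl");
		return 1;
	}

	p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Colour the first 16-byte granule with allocation tag 1. */
	q = (unsigned char *)((unsigned long)p | (1UL << 56));
	asm volatile("stg %0, [%0]" : : "r"(q) : "memory");

	q[0] = 42;	/* pointer tag 1 == memory tag 1: succeeds */
	p[0] = 42;	/* pointer tag 0 != memory tag 1: sync SIGSEGV */

	return 0;
}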
6. ./check_ksm_options pass
1..4
ok 1 Check KSM mte page merge for private mapping, sync mode and mmap memory
ok 2 Check KSM mte page merge for private mapping, async mode and mmap memory
ok 3 Check KSM mte page merge for shared mapping, sync mode and mmap memory
ok 4 Check KSM mte page merge for shared mapping, async mode and mmap memory
# Totals: pass:4 fail:0 xfail:0 xpass:0 skip:0 error:0
7. ./check_gcr_el1_cswitch pass
1..1
ok 1 Verify that GCR_EL1 is set correctly on context switch
# Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
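For context, check_gcr_el1_cswitch covers the per-task random-tag state:
userspace picks an include mask for tag generation via prctl(), the kernel
folds it into that task's GCR_EL1 register, and IRG then draws tags from
the included set, so GCR_EL1 must be saved and restored across context
switches. A hypothetical userspace-side sketch (not from the test; irg
also needs -march=armv8.5-a+memtag):

#include <stdio.h>
#include <sys/mman.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
	unsigned long addr;
	void *buf;

	/* Include tags 1-15 for random generation; the kernel derives this
	 * task's GCR_EL1 exclude mask from the complement. */
	if (prctl(PR_SET_TAGGED_ADDR_CTRL,
		  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
		  (0xfffeUL << PR_MTE_TAG_SHIFT), 0, 0, 0))
		return 1;

	buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | 0x20 /* PROT_MTE */,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	addr = (unsigned long)buf;
	asm volatile("irg %0, %0" : "+r"(addr));	/* random tag, bits 59:56 */
	printf("IRG picked tag %lu\n", (addr >> 56) & 0xf);

	return 0;
}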
8. ./check_buffer_fill pass
1..20
ok 1 Check buffer correctness by byte with sync err mode and mmap memory
ok 2 Check buffer correctness by byte with async err mode and mmap memory
ok 3 Check buffer correctness by byte with sync err mode and mmap/mprotect memory
ok 4 Check buffer correctness by byte with async err mode and mmap/mprotect memory
ok 5 Check buffer write underflow by byte with sync mode and mmap memory
ok 6 Check buffer write underflow by byte with async mode and mmap memory
ok 7 Check buffer write underflow by byte with tag check fault ignore and mmap memory
ok 8 Check buffer write underflow by byte with sync mode and mmap memory
ok 9 Check buffer write underflow by byte with async mode and mmap memory
ok 10 Check buffer write underflow by byte with tag check fault ignore and mmap memory
ok 11 Check buffer write overflow by byte with sync mode and mmap memory
ok 12 Check buffer write overflow by byte with async mode and mmap memory
ok 13 Check buffer write overflow by byte with tag fault ignore mode and mmap memory
ok 14 Check buffer write correctness by block with sync mode and mmap memory
ok 15 Check buffer write correctness by block with async mode and mmap memory
ok 16 Check buffer write correctness by block with tag fault ignore and mmap memory
ok 17 Check initial tags with private mapping, sync error mode and mmap memory
ok 18 Check initial tags with private mapping, sync error mode and mmap/mprotect memory
ok 19 Check initial tags with shared mapping, sync error mode and mmap memory
ok 20 Check initial tags with shared mapping, sync error mode and mmap/mprotect memory
# Totals: pass:20 fail:0 xfail:0 xpass:0 skip:0 error:0
9. ./check_child_memory pass
1..12
ok 1 Check child anonymous memory with private mapping, precise mode and mmap memory
ok 2 Check child anonymous memory with shared mapping, precise mode and mmap memory
ok 3 Check child anonymous memory with private mapping, imprecise mode and mmap memory
ok 4 Check child anonymous memory with shared mapping, imprecise mode and mmap memory
ok 5 Check child anonymous memory with private mapping, precise mode and mmap/mprotect memory
ok 6 Check child anonymous memory with shared mapping, precise mode and mmap/mprotect memory
ok 7 Check child file memory with private mapping, precise mode and mmap memory
ok 8 Check child file memory with shared mapping, precise mode and mmap memory
ok 9 Check child file memory with private mapping, imprecise mode and mmap memory
ok 10 Check child file memory with shared mapping, imprecise mode and mmap memory
ok 11 Check child file memory with private mapping, precise mode and mmap/mprotect memory
ok 12 Check child file memory with shared mapping, precise mode and mmap/mprotect memory
# Totals: pass:12 fail:0 xfail:0 xpass:0 skip:0 error:0
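All of the binaries above live under tools/testing/selftests/arm64/mte/ in
the kernel tree. Something along these lines should rebuild and rerun the
whole group (the exact invocation may differ between trees; ARM64_SUBTARGETS
is read by the arm64 selftests Makefile):

make -C tools/testing/selftests TARGETS=arm64 ARM64_SUBTARGETS=mte
make -C tools/testing/selftests TARGETS=arm64 ARM64_SUBTARGETS=mte run_tests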
>
> -Kees
>
Thread overview: 9+ messages (newest: 2024-06-28 7:06 UTC):
2024-06-27 8:12 [PATCH v2 0/3] arm64: entry: Convert to generic entry Jinjie Ruan
2024-06-27 8:12 ` [PATCH v2 1/3] entry: Add some arch funcs to support arm64 to use " Jinjie Ruan
2024-06-27 8:12 ` [PATCH v2 2/3] arm64: Prepare to switch to " Jinjie Ruan
2024-06-27 8:12 ` [PATCH v2 3/3] arm64: entry: Convert " Jinjie Ruan
2024-06-27 17:01 ` Kees Cook
2024-06-27 17:15 ` Mark Brown
2024-06-27 18:24 ` Kees Cook
2024-06-28 3:20 ` Jinjie Ruan
2024-06-28 7:05 ` Jinjie Ruan