From: Mark Rutland <mark.rutland@arm.com>
To: Jinjie Ruan <ruanjinjie@huawei.com>
Cc: catalin.marinas@arm.com, will@kernel.org, oleg@redhat.com,
tglx@linutronix.de, peterz@infradead.org, luto@kernel.org,
kees@kernel.org, wad@chromium.org, rostedt@goodmis.org,
arnd@arndb.de, ardb@kernel.org, broonie@kernel.org,
rick.p.edgecombe@intel.com, leobras@redhat.com,
linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v3 0/3] arm64: entry: Convert to generic entry
Date: Tue, 22 Oct 2024 14:37:41 +0100 [thread overview]
Message-ID: <Zxeqpa7n1r03KV7t@J2N7QTR9R3> (raw)
In-Reply-To: <ZxejvAmccYMTa4P1@J2N7QTR9R3>
On Tue, Oct 22, 2024 at 02:08:12PM +0100, Mark Rutland wrote:
> On Tue, Oct 22, 2024 at 08:07:54PM +0800, Jinjie Ruan wrote:
> > On 2024/10/17 23:25, Mark Rutland wrote:
> > > There's also some indirection that I don't think is necessary *and*
> > > hides important ordering concerns and results in mistakes. In
> > > particular, note that before this series, enter_from_kernel_mode() calls
> > > the (instrumentable) MTE checks *after* all the necessary lockdep+RCU
> > > management is performed by __enter_from_kernel_mode():
> > >
> > > static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
> > > {
> > > 	__enter_from_kernel_mode(regs);
> > > 	mte_check_tfsr_entry();
> > > 	mte_disable_tco_entry(current);
> > > }
> > >
> > > ... whereas after this series is applied, those MTE checks are placed in
> > > arch_enter_from_kernel_mode(), which irqentry_enter() calls *before* the
> > > necessary lockdep+RCU management. That is broken.
> > >
> > > It would be better to keep that explicit in the arm64 entry code with
> > > arm64-specific wrappers, e.g.
> > >
> > > static noinstr irqentry_state_t enter_from_kernel_mode(struct pt_regs *regs)
> > > {
> > > 	irqentry_state_t state = irqentry_enter(regs);
> > >
> > > 	mte_check_tfsr_entry();
> > > 	mte_disable_tco_entry(current);
> > >
> > > 	return state;
> > > }
> >
> > Hi Mark, it seems there is a problem with
> > arm64_preempt_schedule_irq() when wrapping irqentry_exit() in
> > exit_to_kernel_mode().
> >
> > arm64_preempt_schedule_irq() handles PREEMPT_DYNAMIC and IRQ
> > preemption, which the generic code implements as
> > raw_irqentry_exit_cond_resched(), called from irqentry_exit().
> >
> > Currently only __el1_irq() calls arm64_preempt_schedule_irq(), but once
> > we switch everything over to an arm64-specific exit_to_kernel_mode()
> > that wraps irqentry_exit(), not only __el1_irq() but also el1_abort(),
> > el1_pc(), el1_undef() etc. will attempt to reschedule via logic
> > equivalent to arm64_preempt_schedule_irq().
>
> Yes, the generic entry code will preempt any context where an interrupt
> *could* have been taken from.
>
> I'm not sure it actually matters either way; I believe that the generic
> entry code was written this way because that's what x86 did, and
> checking for whether interrupts are enabled in the interrupted context
> is cheap.
>
> I'd suggest you first write a patch to align arm64's entry code with the
> generic code, by removing the call to arm64_preempt_schedule_irq() from
> __el1_irq(), and adding a call to arm64_preempt_schedule_irq() in
> __exit_to_kernel_mode(), e.g.
>
> | static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs)
> | {
> | 	...
> | 	if (interrupts_enabled(regs)) {
> | 		...
> | 		if (regs->exit_rcu) {
> | 			...
> | 		}
> | 		...
> | 		arm64_preempt_schedule_irq();
> | 		...
> | 	} else {
> | 		...
> | 	}
> | }
>
> That way the behaviour and structure will be more aligned with the
> generic code, and with that as an independent patch it will be easier to
> review/test/bisect/etc.
>
> This change will have at least two key impacts:
>
> (1) We'll preempt even without taking a "real" interrupt. That
> shouldn't result in preemption that wasn't possible before,
> but it does change the probability of preempting at certain points,
> and might have a performance impact, so probably warrants a
> benchmark.
>
> (2) We will not preempt when taking interrupts from a region of kernel
> code where IRQs are enabled but RCU is not watching, matching the
> behaviour of the generic entry code.
>
> This has the potential to introduce livelock if we can ever have a
> screaming interrupt in such a region, so we'll need to go figure out
> whether that's actually a problem.
>
> Having this as a separate patch will make it easier to test/bisect
> for that specifically.
Looking some more, the pseudo-NMI DAIF check in
arm64_preempt_schedule_irq() is going to be a problem, because the
IRQ/NMI path manages DAIF distinctly from all other exception handlers.
I suspect we'll need to factor that out more in the generic entry code.
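For reference, the check in question looks roughly like this in the
current arm64 entry code (a simplified sketch with elisions; details may
differ from the exact tree this series is based on):

| static void __sched arm64_preempt_schedule_irq(void)
| {
| 	...
| 	/*
| 	 * DAIF.DA are cleared at the start of IRQ/FIQ handling, and when
| 	 * GIC priority masking is in use DAIF.IF is cleared for normal
| 	 * IRQs, so a non-zero DAIF here means we took a pseudo-NMI and
| 	 * must not preempt.
| 	 */
| 	if (system_uses_irq_prio_masking() && read_sysreg(daif))
| 		return;
| 	...
| 	preempt_schedule_irq();
| }

The generic raw_irqentry_exit_cond_resched() has no hook for an
architecture-specific check like this, since DAIF is only valid at this
point on the IRQ/NMI path.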
Mark.