From: "Mukesh Kumar Chaurasiya (IBM)"
To: maddy@linux.ibm.com, mpe@ellerman.id.au, npiggin@gmail.com,
	chleroy@kernel.org, ryabinin.a.a@gmail.com, glider@google.com,
	andreyknvl@gmail.com, dvyukov@google.com, vincenzo.frascino@arm.com,
	oleg@redhat.com, kees@kernel.org, luto@amacapital.net,
	wad@chromium.org, mchauras@linux.ibm.com, sshegde@linux.ibm.com,
	thuth@redhat.com, ruanjinjie@huawei.com, akpm@linux-foundation.org,
	macro@orcam.me.uk, ldv@strace.io, charlie@rivosinc.com,
	deller@gmx.de, kevin.brodsky@arm.com, ritesh.list@gmail.com,
	yeoreum.yun@arm.com, agordeev@linux.ibm.com,
	segher@kernel.crashing.org, mark.rutland@arm.com,
	ryan.roberts@arm.com, pmladek@suse.com, feng.tang@linux.alibaba.com,
	peterz@infradead.org, kan.liang@linux.intel.com,
	linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
	kasan-dev@googlegroups.com
Cc: Samir M, David Gow, Venkat Rao Bagalkote
Subject: [PATCH v5 6/8] powerpc: Prepare for IRQ entry exit
Date: Mon, 27 Apr 2026 17:57:40 +0530
Message-ID: <20260427122742.210074-7-mkchauras@gmail.com>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260427122742.210074-1-mkchauras@gmail.com>
References: <20260427122742.210074-1-mkchauras@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Mukesh Kumar Chaurasiya

Move the interrupt entry and exit helper routines from interrupt.h into
the PowerPC-specific entry-common.h header as a preparatory step for
enabling the generic entry/exit framework.

This consolidation places all PowerPC interrupt entry/exit handling in
a single common header, aligning with the generic entry infrastructure.
The helpers provide architecture-specific handling for interrupt and
NMI entry/exit sequences, including:

- arch_interrupt_enter/exit_prepare()
- arch_interrupt_async_enter/exit_prepare()
- arch_interrupt_nmi_enter/exit_prepare()
- supporting helpers such as nap_adjust_return(),
  check_return_regs_valid(), debug register maintenance, and soft mask
  handling

The functions are copied verbatim from interrupt.h. Subsequent patches
will integrate these routines into the generic entry/exit flow.

No functional change intended.
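For illustration, the helpers are intended to be used in strictly
paired fashion around a handler body. A minimal sketch follows; the
example_* and do_*_work() names are placeholders and not part of this
patch, only the arch_* helpers and struct interrupt_nmi_state come
from this series:

	/* Hypothetical synchronous-interrupt caller. */
	static void example_sync_interrupt(struct pt_regs *regs)
	{
		arch_interrupt_enter_prepare(regs);
		do_fault_work(regs);	/* placeholder handler body */
		arch_interrupt_exit_prepare(regs);
	}

	/* Hypothetical NMI caller; state saves soft-mask/ftrace context. */
	static void example_nmi(struct pt_regs *regs)
	{
		struct interrupt_nmi_state state;

		arch_interrupt_nmi_enter_prepare(regs, &state);
		do_nmi_work(regs);	/* placeholder handler body */
		arch_interrupt_nmi_exit_prepare(regs, &state);
	}

The actual call sites are added when the generic entry/exit framework
is wired up later in the series.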
Signed-off-by: Mukesh Kumar Chaurasiya
Tested-by: Samir M
Tested-by: David Gow
Tested-by: Venkat Rao Bagalkote
Reviewed-by: Shrikanth Hegde
---
 arch/powerpc/include/asm/entry-common.h | 358 ++++++++++++++++++++++++
 1 file changed, 358 insertions(+)

diff --git a/arch/powerpc/include/asm/entry-common.h b/arch/powerpc/include/asm/entry-common.h
index ff0625e04778..de5601282755 100644
--- a/arch/powerpc/include/asm/entry-common.h
+++ b/arch/powerpc/include/asm/entry-common.h
@@ -5,10 +5,75 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
 
+#ifdef CONFIG_PPC_IRQ_SOFT_MASK_DEBUG
+/*
+ * WARN/BUG is handled with a program interrupt so minimise checks here to
+ * avoid recursion and maximise the chance of getting the first oops handled.
+ */
+#define INT_SOFT_MASK_BUG_ON(regs, cond)				\
+do {									\
+	if ((user_mode(regs) || (TRAP(regs) != INTERRUPT_PROGRAM)))	\
+		BUG_ON(cond);						\
+} while (0)
+#else
+#define INT_SOFT_MASK_BUG_ON(regs, cond)
+#endif
+
+#ifdef CONFIG_PPC_BOOK3S_64
+extern char __end_soft_masked[];
+bool search_kernel_soft_mask_table(unsigned long addr);
+unsigned long search_kernel_restart_table(unsigned long addr);
+
+DECLARE_STATIC_KEY_FALSE(interrupt_exit_not_reentrant);
+
+static inline bool is_implicit_soft_masked(struct pt_regs *regs)
+{
+	if (user_mode(regs))
+		return false;
+
+	if (regs->nip >= (unsigned long)__end_soft_masked)
+		return false;
+
+	return search_kernel_soft_mask_table(regs->nip);
+}
+
+static inline void srr_regs_clobbered(void)
+{
+	local_paca->srr_valid = 0;
+	local_paca->hsrr_valid = 0;
+}
+#else
+static inline unsigned long search_kernel_restart_table(unsigned long addr)
+{
+	return 0;
+}
+
+static inline bool is_implicit_soft_masked(struct pt_regs *regs)
+{
+	return false;
+}
+
+static inline void srr_regs_clobbered(void)
+{
+}
+#endif
+
+static inline void nap_adjust_return(struct pt_regs *regs)
+{
+#ifdef CONFIG_PPC_970_NAP
+	if (unlikely(test_thread_local_flags(_TLF_NAPPING))) {
+		/* Can avoid a test-and-clear because NMIs do not call this */
+		clear_thread_local_flags(_TLF_NAPPING);
+		regs_set_return_ip(regs, (unsigned long)power4_idle_nap_return);
+	}
+#endif
+}
+
 static __always_inline void booke_load_dbcr0(void)
 {
 #ifdef CONFIG_PPC_ADV_DEBUG_REGS
@@ -31,6 +96,299 @@ static __always_inline void booke_load_dbcr0(void)
 #endif
 }
 
+static inline void booke_restore_dbcr0(void)
+{
+#ifdef CONFIG_PPC_ADV_DEBUG_REGS
+	unsigned long dbcr0 = current->thread.debug.dbcr0;
+
+	if (IS_ENABLED(CONFIG_PPC32) && unlikely(dbcr0 & DBCR0_IDM)) {
+		mtspr(SPRN_DBSR, -1);
+		mtspr(SPRN_DBCR0, global_dbcr0[smp_processor_id()]);
+	}
+#endif
+}
+
+static inline void check_return_regs_valid(struct pt_regs *regs)
+{
+#ifdef CONFIG_PPC_BOOK3S_64
+	unsigned long trap, srr0, srr1;
+	static bool warned;
+	u8 *validp;
+	char *h;
+
+	if (trap_is_scv(regs))
+		return;
+
+	trap = TRAP(regs);
+	// EE in HV mode sets HSRRs like 0xea0
+	if (cpu_has_feature(CPU_FTR_HVMODE) && trap == INTERRUPT_EXTERNAL)
+		trap = 0xea0;
+
+	switch (trap) {
+	case 0x980:
+	case INTERRUPT_H_DATA_STORAGE:
+	case 0xe20:
+	case 0xe40:
+	case INTERRUPT_HMI:
+	case 0xe80:
+	case 0xea0:
+	case INTERRUPT_H_FAC_UNAVAIL:
+	case 0x1200:
+	case 0x1500:
+	case 0x1600:
+	case 0x1800:
+		validp = &local_paca->hsrr_valid;
+		if (!READ_ONCE(*validp))
+			return;
+
+		srr0 = mfspr(SPRN_HSRR0);
+		srr1 = mfspr(SPRN_HSRR1);
+		h = "H";
+
+		break;
+	default:
+		validp = &local_paca->srr_valid;
+		if (!READ_ONCE(*validp))
+			return;
+
+		srr0 = mfspr(SPRN_SRR0);
+		srr1 = mfspr(SPRN_SRR1);
+		h = "";
+		break;
+	}
+
+	if (srr0 == regs->nip && srr1 == regs->msr)
+		return;
+
+	/*
+	 * An NMI / soft-NMI interrupt may have come in after we found
+	 * srr_valid and before the SRRs are loaded. The interrupt then
+	 * comes in and clobbers SRRs and clears srr_valid. Then we load
+	 * the SRRs here and test them above and find they don't match.
+	 *
+	 * Test validity again after that, to catch such false positives.
+	 *
+	 * This test in general will have some window for false negatives
+	 * and may not catch and fix all such cases if an NMI comes in
+	 * later and clobbers SRRs without clearing srr_valid, but hopefully
+	 * such things will get caught most of the time, statistically
+	 * enough to be able to get a warning out.
+	 */
+	if (!READ_ONCE(*validp))
+		return;
+
+	if (!data_race(warned)) {
+		data_race(warned = true);
+		pr_warn("%sSRR0 was: %lx should be: %lx\n", h, srr0, regs->nip);
+		pr_warn("%sSRR1 was: %lx should be: %lx\n", h, srr1, regs->msr);
+		show_regs(regs);
+	}
+
+	WRITE_ONCE(*validp, 0); /* fixup */
+#endif
+}
+
+static inline void arch_interrupt_enter_prepare(struct pt_regs *regs)
+{
+#ifdef CONFIG_PPC64
+	irq_soft_mask_set(IRQS_ALL_DISABLED);
+
+	/*
+	 * If the interrupt was taken with HARD_DIS clear, then enable MSR[EE].
+	 * Asynchronous interrupts get here with HARD_DIS set (see below), so
+	 * this enables MSR[EE] for synchronous interrupts. IRQs remain
+	 * soft-masked. The interrupt handler may later call
+	 * interrupt_cond_local_irq_enable() to achieve a regular process
+	 * context.
+	 */
+	if (!(local_paca->irq_happened & PACA_IRQ_HARD_DIS)) {
+		INT_SOFT_MASK_BUG_ON(regs, !(regs->msr & MSR_EE));
+		__hard_irq_enable();
+	} else {
+		__hard_RI_enable();
+	}
+	/* Enable MSR[RI] early, to support kernel SLB and hash faults */
+#endif
+
+	if (!regs_irqs_disabled(regs))
+		trace_hardirqs_off();
+
+	if (user_mode(regs)) {
+		kuap_lock();
+		account_cpu_user_entry();
+		account_stolen_time();
+	} else {
+		kuap_save_and_lock(regs);
+		/*
+		 * CT_WARN_ON comes here via program_check_exception,
+		 * so avoid recursion.
+		 */
+		if (TRAP(regs) != INTERRUPT_PROGRAM)
+			CT_WARN_ON(ct_state() != CT_STATE_KERNEL &&
+				   ct_state() != CT_STATE_IDLE);
+		INT_SOFT_MASK_BUG_ON(regs, is_implicit_soft_masked(regs));
+		INT_SOFT_MASK_BUG_ON(regs, regs_irqs_disabled(regs) &&
+					   search_kernel_restart_table(regs->nip));
+	}
+	INT_SOFT_MASK_BUG_ON(regs, !regs_irqs_disabled(regs) &&
+				   !(regs->msr & MSR_EE));
+
+	booke_restore_dbcr0();
+}
+
+/*
+ * Care should be taken to note that arch_interrupt_exit_prepare and
+ * arch_interrupt_async_exit_prepare do not necessarily return immediately to
+ * regs context (e.g., if regs is usermode, we don't necessarily return to
+ * user mode). Other interrupts might be taken between here and return,
+ * context switch / preemption may occur in the exit path after this, or a
+ * signal may be delivered, etc.
+ *
+ * The real interrupt exit code is platform specific, e.g.,
+ * interrupt_exit_user_prepare / interrupt_exit_kernel_prepare for 64s.
+ *
+ * However arch_interrupt_nmi_exit_prepare does return directly to regs, because
+ * NMIs do not do "exit work" or replay soft-masked interrupts.
+ */
+static inline void arch_interrupt_exit_prepare(struct pt_regs *regs)
+{
+	if (user_mode(regs)) {
+		BUG_ON(regs_is_unrecoverable(regs));
+		BUG_ON(regs_irqs_disabled(regs));
+		/*
+		 * We don't need to restore AMR on the way back to userspace for KUAP.
+		 * AMR can only have been unlocked if we interrupted the kernel.
+		 */
+		kuap_assert_locked();
+
+		local_irq_disable();
+	}
+}
+
+static inline void arch_interrupt_async_enter_prepare(struct pt_regs *regs)
+{
+#ifdef CONFIG_PPC64
+	/* Ensure arch_interrupt_enter_prepare does not enable MSR[EE] */
+	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+#endif
+	arch_interrupt_enter_prepare(regs);
+#ifdef CONFIG_PPC_BOOK3S_64
+	/*
+	 * RI=1 is set by arch_interrupt_enter_prepare, so this thread flags access
+	 * has to come afterward (it can cause SLB faults).
+	 */
+	if (cpu_has_feature(CPU_FTR_CTRL) &&
+	    !test_thread_local_flags(_TLF_RUNLATCH))
+		__ppc64_runlatch_on();
+#endif
+}
+
+static inline void arch_interrupt_async_exit_prepare(struct pt_regs *regs)
+{
+	/*
+	 * Adjust at exit so the main handler sees the true NIA. This must
+	 * come before irq_exit() because irq_exit can enable interrupts, and
+	 * if another interrupt is taken before nap_adjust_return has run
+	 * here, then that interrupt would return directly to idle nap return.
+	 */
+	nap_adjust_return(regs);
+
+	arch_interrupt_exit_prepare(regs);
+}
+
+struct interrupt_nmi_state {
+#ifdef CONFIG_PPC64
+	u8 irq_soft_mask;
+	u8 irq_happened;
+	u8 ftrace_enabled;
+	u64 softe;
+#endif
+};
+
+static inline bool nmi_disables_ftrace(struct pt_regs *regs)
+{
+	/* Allow DEC and PMI to be traced when they are soft-NMI */
+	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) {
+		if (TRAP(regs) == INTERRUPT_DECREMENTER)
+			return false;
+		if (TRAP(regs) == INTERRUPT_PERFMON)
+			return false;
+	}
+	if (IS_ENABLED(CONFIG_PPC_BOOK3E_64)) {
+		if (TRAP(regs) == INTERRUPT_PERFMON)
+			return false;
+	}
+
+	return true;
+}
+
+static inline void arch_interrupt_nmi_enter_prepare(struct pt_regs *regs,
+						    struct interrupt_nmi_state *state)
+{
+#ifdef CONFIG_PPC64
+	state->irq_soft_mask = local_paca->irq_soft_mask;
+	state->irq_happened = local_paca->irq_happened;
+	state->softe = regs->softe;
+
+	/*
+	 * Set IRQS_ALL_DISABLED unconditionally so irqs_disabled() does
+	 * the right thing, and set IRQ_HARD_DIS. We do not want to reconcile
+	 * because that goes through irq tracing which we don't want in NMI.
+	 */
+	local_paca->irq_soft_mask = IRQS_ALL_DISABLED;
+	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+
+	if (!(regs->msr & MSR_EE) || is_implicit_soft_masked(regs)) {
+		/*
+		 * Adjust regs->softe to be soft-masked if it had not been
+		 * reconciled (e.g., interrupt entry with MSR[EE]=0 but softe
+		 * not yet set disabled), or if it was in an implicit soft
+		 * masked state. This makes regs_irqs_disabled(regs)
+		 * behave as expected.
+		 */
+		regs->softe = IRQS_ALL_DISABLED;
+	}
+
+	__hard_RI_enable();
+
+	/* Don't do any per-CPU operations until interrupt state is fixed */
+
+	if (nmi_disables_ftrace(regs)) {
+		state->ftrace_enabled = this_cpu_get_ftrace_enabled();
+		this_cpu_set_ftrace_enabled(0);
+	}
+#endif
+}
+
+static inline void arch_interrupt_nmi_exit_prepare(struct pt_regs *regs,
+						   struct interrupt_nmi_state *state)
+{
+	/*
+	 * nmi does not call nap_adjust_return because nmi should not create
+	 * new work to do (must use irq_work for that).
+	 */
+
+#ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_BOOK3S
+	if (regs_irqs_disabled(regs)) {
+		unsigned long rst = search_kernel_restart_table(regs->nip);
+
+		if (rst)
+			regs_set_return_ip(regs, rst);
+	}
+#endif
+
+	if (nmi_disables_ftrace(regs))
+		this_cpu_set_ftrace_enabled(state->ftrace_enabled);
+
+	/* Check we didn't change the pending interrupt mask. */
+	WARN_ON_ONCE((state->irq_happened | PACA_IRQ_HARD_DIS) != local_paca->irq_happened);
+	regs->softe = state->softe;
+	local_paca->irq_happened = state->irq_happened;
+	local_paca->irq_soft_mask = state->irq_soft_mask;
+#endif
+}
+
 static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs)
 {
 	kuap_lock();
-- 
2.53.0