From: Thomas Gleixner
To: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org, ada.coupriediaz@arm.com,
 catalin.marinas@arm.com, linux-kernel@vger.kernel.org, luto@kernel.org,
 peterz@infradead.org, ruanjinjie@huawei.com, vladimir.murzin@arm.com,
 will@kernel.org
Subject: Re: [PATCH 1/2] arm64/entry: Fix involuntary preemption exception masking
In-Reply-To:
References: <20260320113026.3219620-1-mark.rutland@arm.com>
 <20260320113026.3219620-2-mark.rutland@arm.com>
 <87eclek0mb.ffs@tglx> <87341ujwl4.ffs@tglx> <87fr5six4d.ffs@tglx>
Date: Wed, 25 Mar 2026 16:46:01 +0100
Message-ID: <87ecl7gbeu.ffs@tglx>

On Wed, Mar 25 2026 at 11:03, Mark Rutland wrote:
> On Sun, Mar 22, 2026 at 12:25:06AM +0100, Thomas Gleixner wrote:
>> The current sequence on entry is:
>>
>> // interrupts are disabled by interrupt/exception entry
>> enter_from_kernel_mode()
>>   irqentry_enter(regs);
>>   mte_check_tfsr_entry();
>>   mte_disable_tco_entry();
>>   daif_inherit(regs);
>> // interrupts are still disabled
>
> That last comment isn't quite right: we CAN and WILL enable interrupts
> in local_daif_inherit(), if and only if they were enabled in the context
> the exception was taken from.

Ok.

> As mentioned above, when handling an interrupt (rather than a
> synchronous exception), we don't use local_daif_inherit(), and instead
> use a different DAIF function to unmask everything except interrupts.
>
>> which then becomes:
>>
>> // interrupts are disabled by interrupt/exception entry
>> irqentry_enter(regs)
>>   establish_state();
>>   // RCU is watching
>>   arch_irqentry_enter_rcu()
>>     mte_check_tfsr_entry();
>>     mte_disable_tco_entry();
>>     daif_inherit(regs);
>> // interrupts are still disabled
>>
>> Which is equivalent versus the MTE/DAIF requirements, no?
>
> As above, we can't use local_daif_inherit() here because we want
> different DAIF masking behavior for entry to interrupts and entry to
> synchronous exceptions. While we could pass some token around to
> determine the behaviour dynamically, that's less clear, more
> complicated, and results in worse code being generated for something we
> know at compile time.

I get it. Duh, what a maze.

> If we can leave DAIF masked early on during irqentry_enter(), I don't
> see why we can't leave all DAIF exceptions masked until the end of
> irqentry_enter().

Yes. Entry is not an issue.

> I *think* what would work for us is we could split some of the exit
> handling (including involuntary preemption) into a "prepare" step, as we
> have for return to userspace. That way, arm64 could handle exiting
> something like:
>
>   local_irq_disable();
>   irqentry_exit_prepare();   // new, all generic logic
>   local_daif_mask();
>   arm64_exit_to_kernel_mode() {
>       ...
>       irqentry_exit();       // ideally irqentry_exit_to_kernel_mode()
>       ...
>   }
>
> ... and other architectures can use a combined exit_to_kernel_mode() (or
> whatever we call that), which does both, e.g.
>
> // either noinstr, __always_inline, or a macro
> void irqentry_prepare_and_exit(void)

That's a bad idea, as it would require a tree-wide rename of all
existing irqentry_exit() users.

> {
>     irqentry_exit_prepare();
>     irqentry_exit();
> }

Aside from the naming, that should work.

Thanks,

        tglx