From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc:
 Greg Kroah-Hartman, patches@lists.linux.dev, Simon Schuster,
 Andreas Oetken, Dinh Nguyen, Sasha Levin
Subject: [PATCH 5.15 301/411] nios2: force update_mmu_cache on spurious tlb-permission--related pagefaults
Date: Mon, 23 Jun 2025 15:07:25 +0200
Message-ID: <20250623130641.223790056@linuxfoundation.org>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250623130632.993849527@linuxfoundation.org>
References: <20250623130632.993849527@linuxfoundation.org>
User-Agent: quilt/0.68
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Simon Schuster

[ Upstream commit 2d8a3179ea035f9341b6a73e5ba4029fc67e983d ]

NIOS2 uses a software-managed TLB for virtual address translation. To
flush a cache line, the original mapping is replaced by one to physical
address 0x0 with no permissions (rwx mapped to 0) set. This can lead to
TLB-permission--related traps when such a nominally flushed entry is
encountered as a mapping for an otherwise valid virtual address within a
process (e.g. due to an MMU-PID-namespace rollover that previously
flushed the complete TLB including entries of existing, running
processes).

The default ptep_set_access_flags implementation from
mm/pgtable-generic.c only forces a TLB-update when the page-table entry
has changed within the page table:

	/*
	 * [...] We return whether the PTE actually changed, which in turn
	 * instructs the caller to do things like update__mmu_cache. [...]
	 */
	int ptep_set_access_flags(struct vm_area_struct *vma,
				  unsigned long address, pte_t *ptep,
				  pte_t entry, int dirty)
	{
		int changed = !pte_same(*ptep, entry);
		if (changed) {
			set_pte_at(vma->vm_mm, address, ptep, entry);
			flush_tlb_fix_spurious_fault(vma, address);
		}
		return changed;
	}

However, no cross-referencing with the TLB-state occurs, so the
flushing-induced pseudo entries that are responsible for the pagefault
in the first place are never pre-empted from TLB on this code path.

This commit fixes this behaviour by always requesting a TLB-update in
this part of the pagefault handling, fixing spurious page-faults on the
way. The handling is a straightforward port of the logic from the MIPS
architecture via an arch-specific ptep_set_access_flags function ported
from arch/mips/include/asm/pgtable.h.

Signed-off-by: Simon Schuster
Signed-off-by: Andreas Oetken
Signed-off-by: Dinh Nguyen
Signed-off-by: Sasha Levin
---
 arch/nios2/include/asm/pgtable.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index 4a995fa628eef..58208325462cd 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -275,4 +275,20 @@ extern void __init mmu_init(void);
 extern void update_mmu_cache(struct vm_area_struct *vma,
 			     unsigned long address, pte_t *pte);
 
+static inline int pte_same(pte_t pte_a, pte_t pte_b);
+
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+static inline int ptep_set_access_flags(struct vm_area_struct *vma,
+					unsigned long address, pte_t *ptep,
+					pte_t entry, int dirty)
+{
+	if (!pte_same(*ptep, entry))
+		set_ptes(vma->vm_mm, address, ptep, entry, 1);
+	/*
+	 * update_mmu_cache will unconditionally execute, handling both
+	 * the case that the PTE changed and the spurious fault case.
+	 */
+	return true;
+}
+
 #endif /* _ASM_NIOS2_PGTABLE_H */
-- 
2.39.5