From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9531528312F;
	Mon,  9 Feb 2026 14:52:45 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1770648765; cv=none;
	b=QndiOtKzunbmQOx+VsL/6zLJAylcGXGzrAZydVEEB49W6fBeVWWVBOXJsJuwaJl0XRekRK7zyM0n52A2fdOD/R6moMHu4HOc3Twvf0JF3CDQCm3ERvSNJN10SkIFfOko61qPP5TqOMI1kMmvzKL2x+4HP2jKQ5lFVDQNj2WbdEI=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org;
	s=arc-20240116; t=1770648765; c=relaxed/simple;
	bh=XTX59P6UsPnSKh/pY5slsi6eIpVh6ZE7x5nzecPpZ4k=;
	h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References:MIME-Version;
	b=kdQTY+1kXdgr/JMRgovrBn0KMgKbqVCA7wjKD30C8Y7M2Vr8qcnxR8voPBdqq13/bKnmt6cRpXdn6GsYfRtslxLvy8TeyFRgdUyO/0nxpdAXBH/Iz2NxW5uKO50bM5Ax1DO2+yWgK0Fckog6lNj3WQkd1e7kcLrPh9N1PAOO0mQ=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=linuxfoundation.org
	header.i=@linuxfoundation.org header.b=GVfLjk3j; arc=none
	smtp.client-ip=10.30.226.201
Authentication-Results: smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=linuxfoundation.org
	header.i=@linuxfoundation.org header.b="GVfLjk3j"
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BBF8AC116C6;
	Mon,  9 Feb 2026 14:52:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1770648765;
	bh=XTX59P6UsPnSKh/pY5slsi6eIpVh6ZE7x5nzecPpZ4k=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=GVfLjk3jl+4iqxAeqBBtglBbBVDNDNCUmcLj5o5QXK10g1OYof7jXG7TiGMHs4TSZ/3Ysm7//gONMrq/a+4A6+Bh53VvlDjbk2fqNST0W7jAGKkZwPrLZttxSiHJk5BuJTdgLvrq6J/w4D8iUEOa99iH98MROvEIFaX/oHpn38g=
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman,
	patches@lists.linux.dev,
	Ryusuke Konishi,
	Andrew Cooper,
	"Borislav Petkov (AMD)",
	Alexander Potapenko,
	Marco Elver,
	Dmitry Vyukov,
	Thomas Gleixner,
	Ingo Molnar,
	Dave Hansen,
	"H. Peter Anvin",
	Jann Horn,
	Andrew Morton
Subject: [PATCH 5.15 01/75] x86/kfence: fix booting on 32bit non-PAE systems
Date: Mon, 9 Feb 2026 15:23:58 +0100
Message-ID: <20260209142301.887390229@linuxfoundation.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260209142301.830618238@linuxfoundation.org>
References: <20260209142301.830618238@linuxfoundation.org>
User-Agent: quilt/0.69
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
List-Id: 
List-Subscribe: 
List-Unsubscribe: 
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Andrew Cooper

commit 16459fe7e0ca6520a6e8f603de4ccd52b90fd765 upstream.

The original patch inverted the PTE unconditionally to avoid
L1TF-vulnerable PTEs, but Linux doesn't make this adjustment in 2-level
paging.  Adjust the logic to use the flip_protnone_guard() helper, which
is a nop on 2-level paging but inverts the address bits in all other
paging modes.

This doesn't matter for the Xen aspect of the original change.  Linux no
longer supports running 32bit PV under Xen, and Xen doesn't support
running any 32bit PV guests without using PAE paging.

Link: https://lkml.kernel.org/r/20260126211046.2096622-1-andrew.cooper3@citrix.com
Fixes: b505f1944535 ("x86/kfence: avoid writing L1TF-vulnerable PTEs")
Reported-by: Ryusuke Konishi
Closes: https://lore.kernel.org/lkml/CAKFNMokwjw68ubYQM9WkzOuH51wLznHpEOMSqtMoV1Rn9JV_gw@mail.gmail.com/
Signed-off-by: Andrew Cooper
Tested-by: Ryusuke Konishi
Tested-by: Borislav Petkov (AMD)
Cc: Alexander Potapenko
Cc: Marco Elver
Cc: Dmitry Vyukov
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Cc: Jann Horn
Cc: 
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/kfence.h |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

--- a/arch/x86/include/asm/kfence.h
+++ b/arch/x86/include/asm/kfence.h
@@ -42,7 +42,7 @@ static inline bool kfence_protect_page(u
 {
 	unsigned int level;
 	pte_t *pte = lookup_address(addr, &level);
-	pteval_t val;
+	pteval_t val, new;
 
 	if (WARN_ON(!pte || level != PG_LEVEL_4K))
 		return false;
@@ -57,11 +57,12 @@ static inline bool kfence_protect_page(u
 		return true;
 
 	/*
-	 * Otherwise, invert the entire PTE. This avoids writing out an
+	 * Otherwise, flip the Present bit, taking care to avoid writing an
 	 * L1TF-vulnerable PTE (not present, without the high address bits
 	 * set).
 	 */
-	set_pte(pte, __pte(~val));
+	new = val ^ _PAGE_PRESENT;
+	set_pte(pte, __pte(flip_protnone_guard(val, new, PTE_PFN_MASK)));
 
 	/*
 	 * If the page was protected (non-present) and we're making it